
bert-bilstm-crf's Introduction

Hi, I'm 西西嘛呦. 👋

taishan1994's GitHub stats

ChatGPT

The following three form a series: how to give an English large language model Chinese support

Graph Neural Networks

Distributed Training

Chinese Information Extraction

Pipeline-based information extraction

Pointer-network-based information extraction: entity recognition, relation extraction, event extraction

GlobalPointer-based information extraction: entity recognition, relation extraction, event extraction

Named Entity Recognition

Relation Extraction

Event Extraction

Text Classification

Sentence Similarity

Entity Linking

Coreference Resolution

Intent Recognition and Slot Filling

Knowledge Graphs

Text Generation

Chinese Keyword Extraction

Machine Question Answering

Chinese Text Error Correction

Model Compression

Others

bert-bilstm-crf's People

Contributors

taishan1994


bert-bilstm-crf's Issues

Error during training

Hi, when fine-tuning with my own prepared data, I get this error:

```
Traceback (most recent call last):
  File "main.py", line 190, in <module>
    main(data_name)
  File "main.py", line 182, in main
    train.train()
  File "main.py", line 45, in train
    for step, batch_data in enumerate(self.train_loader):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [512] at entry 0 and [511] at entry 2
```
All requirements were installed with the versions given in the md file. Is there any code I still need to adjust? Thanks!
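
This stack error usually means one sample in the batch was encoded to a different length (511 vs. 512), so default_collate cannot stack the tensors. Below is a minimal sketch of padding and truncating every field to one fixed max_seq_len before collation, assuming a Hugging Face tokenizer; the function and variable names (encode_sample, max_seq_len) are illustrative, not this repo's actual API.

```python
# Hypothetical sketch: pad/truncate every sample to the same max_seq_len so
# that DataLoader's default_collate can stack the tensors.
import torch

def encode_sample(tokens, labels, tokenizer, label2id, max_seq_len=512):
    # Reserve two positions for [CLS] and [SEP].
    tokens = tokens[:max_seq_len - 2]
    labels = labels[:max_seq_len - 2]

    input_ids = tokenizer.convert_tokens_to_ids(["[CLS]"] + tokens + ["[SEP]"])
    label_ids = [label2id["O"]] + [label2id[l] for l in labels] + [label2id["O"]]
    attention_mask = [1] * len(input_ids)

    # Pad everything to exactly max_seq_len.
    pad_len = max_seq_len - len(input_ids)
    input_ids += [tokenizer.pad_token_id] * pad_len
    attention_mask += [0] * pad_len
    label_ids += [label2id["O"]] * pad_len

    return {
        "input_ids": torch.tensor(input_ids, dtype=torch.long),
        "attention_mask": torch.tensor(attention_mask, dtype=torch.long),
        "labels": torch.tensor(label_ids, dtype=torch.long),
    }
```

Making sure no raw sample exceeds max_seq_len - 2 characters before encoding (or truncating as above) is typically enough to make all batch entries the same size.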

Error when running the main file

D:\Anaconda\Anaconda\envs\pytorch\python.exe D:/python/BERT-BILSTM-CRF-main/BERT-BILSTM-CRF-main/main.py
['O', 'B-故障设备', 'I-故障设备', 'B-故障原因', 'I-故障原因']
{'O': 0, 'B-故障设备': 1, 'I-故障设备': 2, 'B-故障原因': 3, 'I-故障原因': 4}
Some weights of the model checkpoint at ./model_hub/chinese-bert-wwm-ext/ were not used when initializing BertModel: ['cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias']

  • This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    D:\Anaconda\Anaconda\envs\pytorch\lib\site-packages\torch\nn\modules\rnn.py:62: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.1 and num_layers=1
    warnings.warn("dropout option adds dropout after all but last "

Could you help take a look?

Error when running with different data

Hi, training with your data works fine, but the training set generated from my own annotated data keeps failing. The error is:
Traceback (most recent call last):
  File "F:\education\aaa-education\BERT-BILSTM-CRF\main.py", line 191, in <module>
    main(data_name)
  File "F:\education\aaa-education\BERT-BILSTM-CRF\main.py", line 150, in main
    train_data = [json.loads(d) for d in train_data]
  File "F:\education\aaa-education\BERT-BILSTM-CRF\main.py", line 150, in <listcomp>
    train_data = [json.loads(d) for d in train_data]
  File "C:\Users\lsc11232\.conda\envs\deepke\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\Users\lsc11232\.conda\envs\deepke\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\lsc11232\.conda\envs\deepke\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 345 (char 344)

Here is part of my data:
{"id": "AT0001", "text": ["一", "个", "给", "定", "集", "合", "中", "的", "元", "素", "是", "互", "不", "相", "同", "的", ",", "也", "就", "是", "说", ",", "集", "合", "中", "的", "元", "素", "是", "不", "重", "复", "出", "现", "的"], "labels": ["O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O"]}
{"id": "AT0002", "text": ["集", "合", "论", "的", "基", "本", "理", "论", "创", "立", "于", "1", "9", "世", "纪", ",", "关", "于", "集", "合", "的", "最", "简", "单", "的", "说", "法", "就", "是", "在", "朴", "素", "集", "合", "论", "(", "最", "原", "始", "的", "集", "合", "论", ")", "中", "的", "定", "义", ",", "即", "集", "合", "是", "“", "确", "定", "的", "一", "堆", "东", "西", "”", ",", "集", "合", "里", "的", "“", "东", "西", "”", "则", "称", "为", "元", "素"], "labels": ["B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW"]}
{"id": "AT0003", "text": ["现", "代", "的", "集", "合", "一", "般", "被", "定", "义", "为", ":", "由", "一", "个", "或", "多", "个", "确", "定", "的", "元", "素", "所", "构", "成", "的", "整", "体"], "labels": ["O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O"]}
{"id": "AT0004", "text": ["集", "合", "是", "指", "具", "有", "某", "种", "特", "定", "性", "质", "的", "具", "体", "的", "或", "抽", "象", "的", "对", "象", "汇", "总", "而", "成", "的", "集", "体"], "labels": ["B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]}
{"id": "AT0005", "text": ["其", "中", ",", "构", "成", "集", "合", "的", "这", "些", "对", "象", "则", "称", "为", "该", "集", "合", "的", "元", "素"], "labels": ["O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "B-KNOW", "I-KNOW"]}
{"id": "AT0006", "text": ["集", "合", "中", "元", "素", "的", "数", "目", "称", "为", "集", "合", "的", "基", "数", ",", "集", "合", "A", "的", "基", "数", "记", "作", "c", "a", "r", "d", "(", "A", ")"], "labels": ["B-KNOW", "I-KNOW", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "B-KNOW", "I-KNOW", "O", "B-KNOW", "I-KNOW", "O", "B-KNOW", "I-KNOW", "O", "O", "B-KNOW", "I-KNOW", "O", "O", "O", "O", "O", "O", "O", "O", "O"]}

Training error after switching data

Hi, training with your data works fine, but the training set generated from my own annotated data keeps failing. The error is:

Traceback (most recent call last):
  File "main.py", line 190, in <module>
    main(data_name)
  File "main.py", line 153, in main
    dev_data = [json.loads(d) for d in dev_data]
  File "main.py", line 153, in <listcomp>
    dev_data = [json.loads(d) for d in dev_data]
  File "/root/miniconda3/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/root/miniconda3/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/root/miniconda3/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I checked my file format and it follows yours. I also tried switching to a standard JSON file format, but it still fails. Part of my data looks like this:

{"id": "TEX0001", "text":["6", ".", "为", "进", "一", "步", "加", "大", "增", "值", "税", "留", "抵", "退", "税", "政", "策", "实", "施", "力", "度", ",", "着", "力", "稳", "市", "场", "主", "体", "稳", "就", "业", ",", "现", "将", "扩", "大", "全", "额", "退", "还", "增", "值", "税", "留", "抵", "税", "额", "政", "策", "行", "业", "范", "围", "有", "关", "政", "策", "公", "告", "如", "下", ":"], "labels":["O", "O", "O", "O", "O", "O", "O", "O", "B-税费种类", "I-税费种类", "I-税费种类", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-税费种类", "I-税费种类", "I-税费种类", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"id": "TEX0003", "text":["8", ".", "(", "一", ")", "符", "合", "条", "件", "的", "批", "发", "零", "售", "业", "等", "行", "业", "企", "业", ",", "可", "以", "自", "2", "0", "2", "2", "年", "7", "月", "纳", "税", "申", "报", "期", "起", "向", "主", "管", "税", "务", "机", "关", "申", "请", "退", "还", "增", "量", "留", "抵", "税", "额", "。"], "labels":["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-所属行业", "I-所属行业", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"id": "TEX0004", "text":["9", ".", "(", "二", ")", "符", "合", "条", "件", "的", "批", "发", "零", "售", "业", "等", "行", "业", "企", "业", ",", "可", "以", "自", "2", "0", "2", "2", "年", "7", "月", "纳", "税", "申", "报", "期", "起", "向", "主", "管", "税", "务", "机", "关", "申", "请", "一", "次", "性", "退", "还", "存", "量", "留", "抵", "税", "额", "。"], "labels":["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-所属行业", "I-所属行业", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]}
Please advise.
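
"Expecting value: line 1 column 1 (char 0)" usually means an empty line, a UTF-8 BOM, or several JSON objects pasted onto one line. A minimal sketch, assuming one JSON object per line, that skips blank lines and tolerates a BOM (the path is an example, not the repo's fixed location):

```python
# Minimal sketch: read a JSONL dev file, skipping blank lines and a BOM.
import json

with open("data/mydata/ner_data/dev.txt", encoding="utf-8-sig") as f:
    dev_data = [json.loads(line) for line in f if line.strip()]
```

Each {"id": ..., "text": [...], "labels": [...]} object must also sit on its own line; the three samples shown above appear to share a single line, which a per-line json.loads cannot parse.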

Multi-GPU training

How should the code be changed to train on multiple GPUs with DataParallel?
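
A minimal DataParallel sketch, assuming the model's forward returns a scalar loss when labels are passed (adapt it to the repo's actual return type); the helper names are illustrative:

```python
import torch
import torch.nn as nn

def wrap_for_multi_gpu(model):
    """Move the model to GPU and wrap it in DataParallel when >1 GPU is visible."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    return model, device

def train_step(model, optimizer, input_ids, attention_mask, labels, device):
    input_ids, attention_mask, labels = (
        input_ids.to(device), attention_mask.to(device), labels.to(device))
    loss = model(input_ids, attention_mask, labels)
    # DataParallel gathers one value per replica; reduce before backward.
    if loss.dim() > 0:
        loss = loss.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

When the model is wrapped, save model.module.state_dict() instead of model.state_dict() so the checkpoint keys stay unprefixed.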

How should the raw data be processed

How is the raw data processed into this form: "spo_list": [{"h": {"name": "空调", "pos": [5, 7]}, "t": {"name": "制冷效果差", "pos": [7, 12]}, "relation": "部件故障"}]}
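
If the raw data is just sentences plus annotated head/tail entity strings and a relation, one way to build that structure is to record each entity's character span. A hypothetical sketch (it assumes each entity occurs once per sentence; real data should take the offsets from the annotation tool):

```python
# Hypothetical converter to the spo_list format shown above.
import json

def build_sample(text, head, tail, relation):
    h_start = text.find(head)
    t_start = text.find(tail)
    return {
        "text": text,
        "spo_list": [{
            "h": {"name": head, "pos": [h_start, h_start + len(head)]},
            "t": {"name": tail, "pos": [t_start, t_start + len(tail)]},
            "relation": relation,
        }],
    }

print(json.dumps(build_sample("检查发现空调制冷效果差", "空调", "制冷效果差", "部件故障"),
                 ensure_ascii=False))
```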

Error when running main.py

C:\Users\35845\Desktop\连铸\程序\模型构建\ner命名实体识别\BERT-BiLSTM-CRF\ner>python main.py
['O', 'B-故障设备', 'I-故障设备', 'B-故障原因', 'I-故障原因']
{'O': 0, 'B-故障设备': 1, 'I-故障设备': 2, 'B-故障原因': 3, 'I-故障原因': 4}
C:\Users\35845\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\rnn.py:62: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.1 and num_layers=1
warnings.warn("dropout option adds dropout after all but last "
C:\Users\35845\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\optimization.py:429: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
warnings.warn(
Traceback (most recent call last):
  File "main.py", line 189, in <module>
    main(data_name)
  File "main.py", line 183, in main
    report = train.test()
  File "main.py", line 66, in test
    self.model.load_state_dict(torch.load(os.path.join(self.output_dir, "pytorch_model_ner.bin"),map_location='cpu'))
  File "C:\Users\35845\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for BertNer:
	Unexpected key(s) in state_dict: "bert.embeddings.position_ids".
I searched online but still can't figure out how to fix this; any help would be appreciated.
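
The bert.embeddings.position_ids buffer is saved by some transformers versions and absent in others, so a checkpoint written under one version may not load strictly under another. A hedged workaround (assuming no other keys mismatch) is to drop that key or load non-strictly:

```python
import torch

def load_ner_checkpoint(model, ckpt_path):
    """Load pytorch_model_ner.bin saved under a different transformers version."""
    state_dict = torch.load(ckpt_path, map_location="cpu")
    # This buffer only exists in some transformers releases; drop it if present.
    state_dict.pop("bert.embeddings.position_ids", None)
    model.load_state_dict(state_dict, strict=False)
    return model
```

Pinning transformers to the version used for training avoids the mismatch entirely.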

Runtime error

['O', 'B-故障设备', 'I-故障设备', 'B-故障原因', 'I-故障原因']
{'O': 0, 'B-故障设备': 1, 'I-故障设备': 2, 'B-故障原因': 3, 'I-故障原因': 4}
Some weights of the model checkpoint at ./model_hub/chinese-bert-wwm-ext/ were not used when initializing BertModel: ['cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.predictions.bias']

  • This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    F:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\rnn.py:62: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.1 and num_layers=1
    warnings.warn("dropout option adds dropout after all but last "
Traceback (most recent call last):
  File "e:\硕士毕业\BERT-BLSTM-CRF\ner\main.py", line 189, in <module>
    main(data_name)
  File "e:\硕士毕业\BERT-BLSTM-CRF\ner\main.py", line 159, in main
    model = BertNer(args)
  File "e:\硕士毕业\BERT-BLSTM-CRF\ner\model.py", line 24, in __init__
    self.crf = CRF(args.num_labels, batch_first=True)
TypeError: __init__() got an unexpected keyword argument 'batch_first'
Could you take a look at what's causing this error?
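
The batch_first keyword belongs to the pytorch-crf package (imported as torchcrf), while the similarly named TorchCRF package ships a CRF class without it, so this TypeError usually means the wrong CRF package is installed or is shadowing the right one. A minimal check, assuming pytorch-crf is the intended dependency:

```python
# pip uninstall TorchCRF
# pip install pytorch-crf
from torchcrf import CRF   # pytorch-crf installs the module name "torchcrf"

crf = CRF(num_tags=5, batch_first=True)   # accepted by pytorch-crf
print(type(crf))
```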

Code error

Is test_loader=dev_loader intentional, or is it a mistake?
Would changing it to test_loader=test_loader affect how the code runs?

Poor performance after adapting the model to English data

Hi, with GPT's help I modified your model to handle English data.
On a public English dataset it scores around 80, whereas the W2NER model reaches about 90,
and papers generally report this kind of model topping out around 94. Is this a hyperparameter issue, or a consequence of my changes to the original model?
For the English data I used bert-base-cased as the pretrained model; does the choice of pretrained model also matter?
I'd really appreciate your help, thank you!

Parameter error

Hello, when running the main file I got the error shown in the screenshot. According to GPT's reading (see the second screenshot), the cause is a missing target_names argument (it says the list is empty). After roughly adding/modifying the parameter it still fails. Could you point me in the right direction?
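
Without the screenshots the exact call is unclear, but a "target_names is empty" complaint typically comes from a metrics report being handed label names that don't match the label ids, for instance sklearn's classification_report. A small, hypothetical reproduction showing the expected pairing (this assumes sklearn is the reporter in use):

```python
from sklearn.metrics import classification_report

id2label = {0: "O", 1: "B-故障设备", 2: "I-故障设备", 3: "B-故障原因", 4: "I-故障原因"}
y_true = [0, 1, 2, 0, 3, 4]
y_pred = [0, 1, 2, 0, 3, 0]

# labels and target_names must be non-empty and the same length;
# an empty label list (e.g. an empty or unread labels file) triggers the error.
print(classification_report(
    y_true, y_pred,
    labels=list(id2label.keys()),
    target_names=list(id2label.values()),
))
```

If the list really is empty, the labels file for the chosen dataset was probably not found or not read.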

The dgre dataset runs fine, but switching to the duie data fails

Hi, the first dataset runs successfully. Since my own data has multiple entity types, I wanted to try the duie data next, but the dataset files seem to be missing, e.g.:
self.train_file = self.data_path + "ori_data/duie_train.json"
self.dev_file = self.data_path + "ori_data/duie_dev.json"
self.test_file = self.data_path + "ori_data/duie_test2.json"
self.schema_file = self.data_path + "ori_data/duie_schema.json"
Are these data files simply not included?
Thanks! Or do I need to convert my own data into that format and adapt the dgre code so it handles multi-class entity recognition instead of only two entity types?

Error after switching to my own dataset and labels

Traceback (most recent call last):
  File "D:/BERT/ner/main.py", line 189, in <module>
    main(data_name)
  File "D:/BERT/ner/main.py", line 183, in main
    report = train.test()
  File "D:/BERT/ner/main.py", line 66, in test
    self.model.load_state_dict(torch.load(os.path.join(self.output_dir, "pytorch_model_ner.bin")))
  File "C:\Users\liu\.conda\envs\bbc\lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for BertNer:
	size mismatch for linear.weight: copying a param with shape torch.Size([5, 256]) from checkpoint, the shape in current model is torch.Size([13, 256]).
	size mismatch for linear.bias: copying a param with shape torch.Size([5]) from checkpoint, the shape in current model is torch.Size([13]).
	size mismatch for crf.start_transitions: copying a param with shape torch.Size([5]) from checkpoint, the shape in current model is torch.Size([13]).
	size mismatch for crf.end_transitions: copying a param with shape torch.Size([5]) from checkpoint, the shape in current model is torch.Size([13]).
	size mismatch for crf.transitions: copying a param with shape torch.Size([5, 5]) from checkpoint, the shape in current model is torch.Size([13, 13]).
This appeared after I added a few labels to the labels file. I guess the tensors being fed in don't match what the model expects. How do I fix this, or do I need to retrain the model from scratch? I looked through many of the settings files and couldn't find anywhere to change the number of labels.
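
The size mismatches mean the checkpoint in the output directory was trained with 5 labels while the current labels file defines 13. The label count appears to come from the labels file, so there is no separate setting to change, but the old pytorch_model_ner.bin cannot be reused and the model has to be retrained. A hypothetical guard that makes the situation explicit before loading:

```python
import torch

def safe_load(model, ckpt_path, num_labels):
    """Refuse to load a checkpoint whose CRF was trained with a different label set."""
    state_dict = torch.load(ckpt_path, map_location="cpu")
    ckpt_labels = state_dict["crf.transitions"].shape[0]
    if ckpt_labels != num_labels:
        raise ValueError(
            f"checkpoint has {ckpt_labels} labels but the labels file defines "
            f"{num_labels}; retrain instead of loading this checkpoint")
    model.load_state_dict(state_dict)
    return model
```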

TypeError

Traceback (most recent call last):
  File "D:\python\python-workspace\BERT-BILSTM-CRF\main.py", line 189, in <module>
    main(data_name)
  File "D:\python\python-workspace\BERT-BILSTM-CRF\main.py", line 159, in main
    model = BertNer(args)
  File "D:\python\python-workspace\BERT-BILSTM-CRF\model.py", line 25, in __init__
    self.crf = CRF(args.num_labels, batch_first=True)
TypeError: __init__() got an unexpected keyword argument 'batch_first'
What is this problem, and how can it be fixed?

Dataset question

Hi, a question: the bundled dgre dataset runs successfully, but when I switch to my own dataset, what format should it be in?
Here is one sample from my dataset:
{"id":65521,"text":"811:北京市发展和改革委员会发文字号:京发改〔2023〕293号公布日期:2023.03.03施行日期:2023.03.03时效性:现行有效效力位阶:地方规范性文件法规类别:能源综合规定节能管理北京市发展和改革委员会关于印发数据中心项目年可再生能源利用水平核实评价技术导则(试行)的通知(京发改〔2023〕293号)各有关单位:  根据《关于进一步加强数据中心项目节能审查的若干规定》(京发改规〔2021〕4号)相关工作要求,为依据节能审查意见和节能报告做好取得节能审查批复的数据中心项目的年可再生能源利用水平核实评价工作,我们研究制定了《数据中心项目年可再生能源利用水平核实评价技术导则(试行)》,现予以印发。试行期间,如有问题和意见建议,请及时反馈。  特此通知。北京市发展和改革委员会  2023年3月3日  附件:数据中心项目年可再生能源利用水平核实评价技术导则(试行)附件预览无相关内容","entities":[{"id":473,"label":"政策主体-政策制定者","start_offset":341,"end_offset":352},{"id":483,"label":"政策过程-政策评估","start_offset":261,"end_offset":267},{"id":5917,"label":"政策主体-政策制定者","start_offset":103,"end_offset":114}],"relations":[],"Comments":[]}。

Runtime error, please help

['O', 'B-故障设备', 'I-故障设备', 'B-故障原因', 'I-故障原因']
{'O': 0, 'B-故障设备': 1, 'I-故障设备': 2, 'B-故障原因': 3, 'I-故障原因': 4}
Some weights of the model checkpoint at ./model_hub/chinese-bert-wwm-ext/ were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias']

  • This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    C:\Users\DELL\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\rnn.py:62: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.1 and num_layers=1
    warnings.warn("dropout option adds dropout after all but last "
    C:\Users\DELL\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
    warnings.warn(
Traceback (most recent call last):
  File "C:/Users/DELL/Desktop/BERT-BILSTM-CRF-main/main.py", line 190, in <module>
    main(data_name)
  File "C:/Users/DELL/Desktop/BERT-BILSTM-CRF-main/main.py", line 182, in main
    train.train()
  File "C:/Users/DELL/Desktop/BERT-BILSTM-CRF-main/main.py", line 52, in train
    output = self.model(input_ids, attention_mask, labels)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\DELL\Desktop\BERT-BILSTM-CRF-main\model.py", line 36, in forward
    loss = -self.crf(seq_out, labels, mask=attention_mask.bool(), reduction='mean')
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python38\lib\site-packages\TorchCRF\__init__.py", line 102, in forward
    numerator = self._compute_score(emissions, tags, mask)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python38\lib\site-packages\TorchCRF\__init__.py", line 186, in _compute_score
    score = self.start_transitions[tags[0]]
IndexError: tensors used as indices must be long, byte or bool tensors

Process finished with exit code 1
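
The CRF indexes its transition matrix with the label tensor, so the labels must be torch.long; an int32 or float label tensor produces exactly this IndexError. Note also that the traceback runs through the TorchCRF package rather than pytorch-crf. A minimal sketch, assuming pytorch-crf (module name torchcrf) and long-typed labels:

```python
import torch
from torchcrf import CRF

crf = CRF(num_tags=5, batch_first=True)
emissions = torch.randn(2, 8, 5)                      # (batch, seq_len, num_tags)
tags = torch.randint(0, 5, (2, 8), dtype=torch.long)  # must be long, not int32/float
mask = torch.ones(2, 8, dtype=torch.bool)
loss = -crf(emissions, tags, mask=mask, reduction="mean")
print(loss.item())
```

Casting the label tensors to torch.long where the dataset builds them (torch.tensor(label_ids, dtype=torch.long)) avoids the index error regardless of which CRF package is installed.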

About the open-source license

May I modify your code and re-publish it on GitHub? Since you haven't chosen an open-source license, I have some copyright concerns.

Runtime error

Traceback (most recent call last):
File "e:\git clone download\BERT-BILSTM-CRF\main.py", line 190, in
main(data_name)
File "e:\git clone download\BERT-BILSTM-CRF\main.py", line 182, in main
train.train()
File "e:\git clone download\BERT-BILSTM-CRF\main.py", line 52, in train
output = self.model(input_ids, attention_mask, labels)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "e:\git clone download\BERT-BILSTM-CRF\model.py", line 27, in forward
bert_output = self.bert(input_ids=input_ids, attention_mask=attention_mask)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\transformers\models\bert\modeling_bert.py", line 1013, in forward
encoder_outputs = self.encoder(
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\transformers\models\bert\modeling_bert.py", line 607, in forward
layer_outputs = layer_module(
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\transformers\models\bert\modeling_bert.py", line 497, in forward
self_attention_outputs = self.attention(
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\transformers\models\bert\modeling_bert.py", line 427, in forward
self_outputs = self.self(
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\transformers\models\bert\modeling_bert.py", line 286, in forward
mixed_query_layer = self.query(hidden_states)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\modules\linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
File "E:\miniconda3\envs\cyclegan\lib\site-packages\torch\nn\functional.py", line 1848, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)
How can I resolve this?
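
CUBLAS_STATUS_NOT_INITIALIZED is often a downstream symptom: either the GPU ran out of memory, or an earlier kernel (commonly an out-of-range token or label id) failed asynchronously and cuBLAS is simply where the failure surfaces. Two hedged ways to expose the real error are to make CUDA launches synchronous or to run one batch on the CPU; the helper below is illustrative only.

```python
# Run in the shell, so the variable is set before CUDA initializes:
#   CUDA_LAUNCH_BLOCKING=1 python main.py
#
# Or reproduce one batch on CPU to get a readable Python traceback:
import torch

def debug_on_cpu(model, input_ids, attention_mask, labels):
    model = model.cpu()
    return model(input_ids.cpu(), attention_mask.cpu(), labels.cpu())
```

If the underlying cause is simply GPU memory exhaustion, reducing the batch size or max sequence length also helps.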

Training with mps on an M2 chip is extremely slow

I changed the device setup to mps as shown in the attached screenshots, but training is extremely slow, even feeling ten times slower than CPU. GPU utilization is very low while memory usage is fairly high, around 9 GB. I tested other mps workloads and they are normal, several times faster than CPU. Could this be a problem with my settings?
