
dbnet.pytorch's Introduction

Real-time Scene Text Detection with Differentiable Binarization

note: some code is inherited from MhLiao/DB

Chinese write-up (中文解读)

Network architecture (figure)

update

2020-06-07: Added grayscale-image training. When training on grayscale images, remove dataset.args.transforms.Normalize from the config.
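
A minimal sketch of what that change looks like, assuming the dataset.args.transforms layout shown in the training log quoted in the issues below (only ToTensor remains; the Normalize entry is the one to delete):

# illustrative excerpt of dataset.args.transforms for grayscale (GRAY) training
transforms = [
    {'type': 'ToTensor', 'args': {}},
    # {'type': 'Normalize', 'args': {'mean': [...], 'std': [...]}}  # removed for grayscale input
]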

Install Using Conda

conda env create -f environment.yml
git clone https://github.com/WenmuZhou/DBNet.pytorch.git
cd DBNet.pytorch/

or

Install Manually

conda create -n dbnet python=3.6
conda activate dbnet

conda install ipython pip

# python dependencies
pip install -r requirement.txt

# install PyTorch with cuda-10.1
# Note that you can change the cudatoolkit version to the version you want.
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

# clone repo
git clone https://github.com/WenmuZhou/DBNet.pytorch.git
cd DBNet.pytorch/

Requirements

  • pytorch 1.4+
  • torchvision 0.5+
  • gcc 4.9+

Download

TBD

Data Preparation

Training data: prepare a text file train.txt in the following format, using '\t' as the separator:

./datasets/train/img/001.jpg	./datasets/train/gt/001.txt

Validation data: prepare a text file test.txt in the following format, using '\t' as the separator:

./datasets/test/img/001.jpg	./datasets/test/gt/001.txt
  • Store images in the img folder
  • Store ground-truth files in the gt folder

Each ground-truth file is a .txt file with the following format:

x1, y1, x2, y2, x3, y3, x4, y4, annotation
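
If it helps, a small script along these lines can generate the list files; the folder layout and the 001.jpg -> 001.txt naming follow the example above and are otherwise assumptions:

# hedged helper: pair every image in img/ with the gt file of the same stem in gt/
import os

def write_list_file(img_dir, gt_dir, out_path):
    with open(out_path, 'w', encoding='utf-8') as f:
        for name in sorted(os.listdir(img_dir)):
            stem, ext = os.path.splitext(name)
            if ext.lower() not in ('.jpg', '.jpeg', '.png'):
                continue
            f.write('{}\t{}\n'.format(os.path.join(img_dir, name),
                                      os.path.join(gt_dir, stem + '.txt')))  # '\t' separator

write_list_file('./datasets/train/img', './datasets/train/gt', './datasets/train.txt')
write_list_file('./datasets/test/img', './datasets/test/gt', './datasets/test.txt')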

Train

  1. Configure dataset['train']['dataset']['args']['data_path'] and dataset['validate']['dataset']['args']['data_path'] in config/icdar2015_resnet18_FPN_DBhead_polyLR.yaml (see the sketch after this list).
  • Single-GPU training
bash singlel_gpu_train.sh
  • Multi-GPU training
bash multi_gpu_train.sh
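
A minimal check of those entries, assuming the merged config has the layout shown in the training log quoted in the issues below (PyYAML is used here for illustration; the project may load and merge its configs differently):

import yaml

with open('config/icdar2015_resnet18_FPN_DBhead_polyLR.yaml') as f:
    cfg = yaml.safe_load(f)

# both fields are lists of annotation list files (see Data Preparation above)
print(cfg['dataset']['train']['dataset']['args']['data_path'])     # e.g. ['./datasets/train.txt']
print(cfg['dataset']['validate']['dataset']['args']['data_path'])  # e.g. ['./datasets/test.txt']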

Test

eval.py is used to evaluate the model on the test dataset.

  1. Configure model_path in eval.sh.
  2. Run the following script to test:
bash eval.sh

Predict

predict.py can be used to run inference on all images in a folder.

  1. Configure model_path, input_folder, and output_folder in predict.sh.
  2. Run the following script to predict:
bash predict.sh

You can change the model_path in the predict.sh file to your model location.

Tip: if the results are not good, you can adjust thre (the detection threshold) in predict.sh.

The project is still under development.

Performance

Only trained on the ICDAR2015 dataset.

Method | image size (short side) | learning rate | Precision (%) | Recall (%) | F-measure (%) | FPS
SynthText-Deform-ResNet-18 (paper) | 736 | 0.007 | 86.8 | 78.4 | 82.3 | 48
ImageNet-resnet18-FPN-DBHead | 736 | 1e-3 | 87.03 | 75.06 | 80.6 | 43
ImageNet-Deform-Resnet18-FPN-DBHead | 736 | 1e-3 | 88.61 | 73.84 | 80.56 | 36
ImageNet-resnet50-FPN-DBHead | 736 | 1e-3 | 88.06 | 77.14 | 82.24 | 27
ImageNet-resnest50-FPN-DBHead | 736 | 1e-3 | 88.18 | 76.27 | 81.78 | 27

examples

TBD

todo

  • multi-GPU training

reference

  1. https://arxiv.org/pdf/1911.08947.pdf
  2. https://github.com/WenmuZhou/PANet.pytorch
  3. https://github.com/MhLiao/DB

If this repository helps you, please star it. Thanks.

dbnet.pytorch's People

Contributors

seekingdeep, wenmuzhou

dbnet.pytorch's Issues

Cannot find the get_model function; unclear what the json file referenced in the config yaml should contain

Hello, after trying to run your code I ran into a few problems, and I'm sorry to bother you by asking here. This is my first time asking a question on GitHub, so apologies in advance if anything comes across the wrong way.

1. Was the get_model function removed? I searched every folder and could not find it.

2. For the training step, according to your instructions I need to configure dataset['train']['dataset']['args']['data_path'] and dataset['validate']['dataset']['args']['data_path'] in config/icdar2015_resnet18_FPN_DBhead_polyLR.yaml.

The data_path in that config yaml points to a json file; what exactly should that json file contain? Is it the names of the test and train images, or the image names plus their corresponding files? I see test.txt and train.txt in your datasets folder, and their lines look like ./datasets/test/img/img_2.jpg ./datasets/test/gt/img_2.txt, so I'm a bit confused. Could you post a screenshot of the json file contents? Thank you very much!

from models import get_model

(dlipy3) [alphamind@alphamind DBNet.pytorch-master]$ bash eval.sh
Traceback (most recent call last):
  File "tools/eval.py", line 72, in <module>
    eval = EVAL(args.model_path)
  File "tools/eval.py", line 18, in __init__
    from models import get_model
ModuleNotFoundError: No module named 'models'
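
This traceback typically means the repository root is not on Python's module search path when tools/eval.py is launched directly; a minimal sketch of a common workaround, assuming commands are run from the repository root:

import os
import sys

# make top-level packages such as `models` importable;
# equivalently: export PYTHONPATH=$PYTHONPATH:. before running bash eval.sh
sys.path.append(os.getcwd())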


The model is quite slow

During training and testing I find the speed quite slow. With batch_size set to 32, training takes as long as shown in the screenshot, and the test FPS is 0.244, using icdar2015_resnet18_FPN_DBhead_polyLR.yaml. Is this normal? It feels very slow. Could WenmuZhou or anyone else take a look? Thanks.

Testing the model

Could you provide your trained model? I'd like to try it out.

Pretrained model weights

Hi,

Thanks for the repository. Is it possible to make available the pretrained network weights/checkpoint ?

Regards,
Radu

about dataset

Hi, I really like your coding style, so I've starred basically all of your repos. One question:
I noticed that your repos mostly only report results on ICDAR15, and the dataset code only covers ICDAR15 and SynthText. Have you experimented with other datasets, for example curved-text datasets?

Backpropagation error during training

Traceback (most recent call last):
  File "train_eval_test/train.py", line 80, in <module>
    main(config)
  File "train_eval_test/train.py", line 64, in main
    trainer.train()
  File "/share/yongqin/experimentplan/PDBNet/trainer/base_trainer.py", line 103, in train
    self.epoch_result = self._train_epoch(epoch)
  File "/share/yongqin/experimentplan/PDBNet/trainer/trainer.py", line 85, in _train_epoch
    loss_dict['loss'].backward()
  File "/root/miniconda3/envs/py36/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/root/miniconda3/envs/py36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 640, 640]], which is output 0 of SliceBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Problem when generating the shrink_map

Hello, I found that the polygon_area function used by validate_polygons in make_shrink_map.py does not seem to be correct: when computing the area of an axis-aligned rectangular box it always returns 0, which causes those valid gt boxes to be ignored.
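
For comparison, a standard shoelace-formula implementation (the function name here is illustrative, not the project's API) returns a non-zero area for axis-aligned rectangles:

import numpy as np

def shoelace_area(polygon):
    """Signed area of an (N, 2) polygon; an axis-aligned w x h rectangle gives w * h."""
    pts = np.asarray(polygon, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

# e.g. abs(shoelace_area([[0, 0], [10, 0], [10, 5], [0, 5]])) == 50.0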

About DeformConv2d

Hello!
I saw a line in your code importing DeformConv2d:
from torchvision.ops import DeformConv2d
When I tried to look at the DeformConv2d source I got a "Module 'DeformConv2d' not found" message, and I could not find DeformConv2d in the torchvision documentation either.
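
For reference, DeformConv2d is available in torchvision.ops from torchvision 0.5 onward, which matches the "torchvision 0.5+" requirement listed above; a quick check of the local install (the import itself is the test):

import torchvision
from torchvision.ops import DeformConv2d  # raises ImportError on torchvision older than 0.5

print(torchvision.__version__)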

recall not improving

I am training the model on a custom dataset and recall is still 0 after 28 epochs. Am I missing anything?

Memory leak during training

Thanks for generously open-sourcing this project.
During training I am seeing what looks like a memory leak. Is there a good way to pin down where it comes from?

PC freezes when change img_mode into GRAY

RGB trains fine, but when I switch to GRAY the PC freezes with memory usage at 100%.
Here are the logs:

2020-06-24 15:42:53,301 DBNet.pytorch INFO: {'arch': {'backbone': {'in_channels': 1,
                       'pretrained': False,
                       'type': 'resnet18'},
          'head': {'k': 50, 'out_channels': 2, 'type': 'DBHead'},
          'neck': {'inner_channels': 256, 'type': 'FPN'},
          'type': 'Model'},
 'dataset': {'train': {'dataset': {'args': {'data_path': ['./datasets/train.txt'],
                                            'filter_keys': ['img_path',
                                                            'img_name',
                                                            'text_polys',
                                                            'texts',
                                                            'ignore_tags',
                                                            'shape'],
                                            'ignore_tags': ['*', '###'],
                                            'img_mode': 'GRAY',
                                            'pre_processes': [{'args': [{'args': {'p': 0.5},
                                                                         'type': 'Fliplr'},
                                                                        {'args': {'rotate': [-10,
                                                                                             10]},
                                                                         'type': 'Affine'},
                                                                        {'args': {'size': [0.5,
                                                                                           3]},
                                                                         'type': 'Resize'}],
                                                               'type': 'IaaAugment'},
                                                              {'args': {'keep_ratio': True,
                                                                        'max_tries': 50,
                                                                        'size': [640,
                                                                                 640]},
                                                               'type': 'EastRandomCropData'},
                                                              {'args': {'shrink_ratio': 0.4,
                                                                        'thresh_max': 0.7,
                                                                        'thresh_min': 0.3},
                                                               'type': 'MakeBorderMap'},
                                                              {'args': {'min_text_size': 8,
                                                                        'shrink_ratio': 0.4},
                                                               'type': 'MakeShrinkMap'}],
                                            'transforms': [{'args': {},
                                                            'type': 'ToTensor'},
                                                           {'args': {'mean': [0.485,
                                                                              0.456,
                                                                              0.406],
                                                                     'std': [0.229,
                                                                             0.224,
                                                                             0.225]},
                                                            'type': 'Normalize'}]},
                                   'type': 'ICDAR2015Dataset'},
                       'loader': {'batch_size': 1,
                                  'collate_fn': '',
                                  'num_workers': 6,
                                  'pin_memory': True,
                                  'shuffle': True}},
             'validate': {'dataset': {'args': {'data_path': ['./datasets/test.txt'],
                                               'filter_keys': [],
                                               'ignore_tags': ['*', '###'],
                                               'img_mode': 'GRAY',
                                               'pre_processes': [{'args': {'resize_text_polys': False,
                                                                           'short_size': 736},
                                                                  'type': 'ResizeShortSize'}],
                                               'transforms': [{'args': {},
                                                               'type': 'ToTensor'},
                                                              {'args': {'mean': [0.485,
                                                                                 0.456,
                                                                                 0.406],
                                                                        'std': [0.229,
                                                                                0.224,
                                                                                0.225]},
                                                               'type': 'Normalize'}]},
                                      'type': 'ICDAR2015Dataset'},
                          'loader': {'batch_size': 1,
                                     'collate_fn': 'ICDARCollectFN',
                                     'num_workers': 6,
                                     'pin_memory': False,
                                     'shuffle': True}}},
 'distributed': False,
 'local_rank': 0,
 'loss': {'alpha': 1, 'beta': 10, 'ohem_ratio': 3},
 'lr_scheduler': {'args': {'warmup_epoch': 3}, 'type': 'WarmupPolyLR'},
 'metric': {'args': {'is_output_polygon': False}, 'type': 'QuadMetric'},
 'name': 'DBNet_resnet18_FPN_DBHead',
 'optimizer': {'args': {'amsgrad': True, 'lr': 0.001, 'weight_decay': 0},
               'type': 'Adam'},
 'post_processing': {'args': {'box_thresh': 0.7,
                              'max_candidates': 1000,
                              'thresh': 0.3,
                              'unclip_ratio': 1.5},
                     'type': 'SegDetectorRepresenter'},
 'trainer': {'epochs': 1200,
             'finetune_checkpoint': '',
             'log_iter': 10,
             'output_dir': '/media/1234/12/3213/dbnetpytorch/DBNet.pytorch/output',
             'resume_checkpoint': '',
             'seed': 2,
             'show_images_iter': 50,
             'tensorboard': True}}
2020-06-24 15:42:53,332 DBNet.pytorch INFO: train with device cuda and pytorch 1.4.0
2020-06-24 15:42:57,967 DBNet.pytorch INFO: train dataset has 5979 samples,5979 in dataloader, validate dataset has 300 samples,300 in dataloader

Error: KeyError: 'type'

Hello, I am getting a KeyError: 'type'; details below:
Traceback (most recent call last):
  File "tools/predict.py", line 140, in <module>
    model = Pytorch_model(args.model_path, post_p_thre=args.thre, gpu_id=0)
  File "tools/predict.py", line 55, in __init__
    self.model = build_model(config['arch'].pop('type'), **config['arch'])
KeyError: 'type'

Is there a way to fix this? Thanks!

In some cases, a long white zone in pred.jpg is not detected as a text region (no polygon in the output image)

Thank you for sharing this work!
In some cases I can see a white zone in pred.jpg but no polygon at the same place in the output image. Why does that happen?
For example, the short white zone is detected as a text region with a polygon in the output image, but the long white zone is not.

I tried setting a very low --thre, but it did not help; the output is the same.
Is there any place in the code that checks the length of a white zone?

about train

I downloaded your code and modified the training path, but it will report an error when training with a single GPU.
Traceback (most recent call last):
  File "tools/train.py", line 67, in <module>
    from utils import parse_config
ModuleNotFoundError: No module named 'utils'

Why is the threshold map not used during prediction?

Hello, Thank you for the great work!

When I read the prediction code, I found that only the probability map is used; the threshold map is not used.
seg_detector_representer.py

class SegDetectorRepresenter():
    def __call__(self, batch, pred, is_output_polygon=False):
        # only the probability map is used; the threshold map `pred[:, 1, :, :]` is ignored
        pred = pred[:, 0, :, :]
        segmentation = self.binarize(pred)
        ...

Is there any reason for not using the threshold map?
And are there any advantages or disadvantages to this?
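
For context, a rough sketch of the relationship described in the DB paper (reference 1 in the README above): the differentiable binarization that combines the probability and threshold maps is a training-time device, while at inference the probability map can simply be thresholded, which is what the code above does. The constants mirror the config shown earlier ('k': 50, post_processing 'thresh': 0.3); the function names are illustrative:

import torch

def approximate_binarization(prob_map, thresh_map, k=50):
    # training-time differentiable binarization: B = sigmoid(k * (P - T))
    return torch.sigmoid(k * (prob_map - thresh_map))

def inference_binarize(prob_map, thresh=0.3):
    # prediction-time hard threshold on the probability map alone
    return prob_map > thresh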

Training on my own dataset: how are the gt values transformed?

Overall description:
I labeled my own dataset (in .jsp format) in labelme, clicking points clockwise, so all the coordinate values are floats. I read the json files saved by labelme and wrote them to .txt files, then trained for about 700 epochs, and recall, precision, and f1 are all 0.0; at prediction time the three metrics are also 0.0.

However, when training and predicting with the original ICDAR2015 dataset, the test metrics are not all 0.0, and neither are the prediction metrics.

Comparing the two, the problem may be the data: in ICDAR2015 all gt coordinates are integers, while my gt coordinates are all floats.

Question:
Is labelme-annotated data different from the gt data in some way? Is some transformation applied? What could cause my dataset's metrics to stay at 0.0?

Error encountered during training

I followed the setup steps and ran singlel_gpu_train.sh, but hit the following error.

How should I fix this? Thanks.

Traceback (most recent call last):
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorboard/compat/__init__.py", line 47, in tf
    from tensorboard.compat import notf  # pylint: disable=g-import-not-at-top
ImportError: cannot import name 'notf'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/train.py", line 79, in <module>
    main(config)
  File "tools/train.py", line 59, in main
    validate_loader=validate_loader)
  File "/home/ai/Public/DBNet/DBNet.pytorch/trainer/trainer.py", line 16, in __init__
    super(Trainer, self).__init__(config, model, criterion)
  File "/home/ai/Public/DBNet/DBNet.pytorch/base/base_trainer.py", line 73, in __init__
    self.writer = SummaryWriter(self.save_dir)
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 225, in __init__
    self._get_file_writer()
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 256, in _get_file_writer
    self.flush_secs, self.filename_suffix)
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 66, in __init__
    log_dir, max_queue, flush_secs, filename_suffix)
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorboard/summary/writer/event_file_writer.py", line 73, in __init__
    if not tf.io.gfile.exists(logdir):
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorboard/lazy.py", line 65, in __getattr__
    return getattr(load_once(self), attr_name)
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorboard/lazy.py", line 91, in wrapper
    cache[arg] = f(arg)
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorboard/lazy.py", line 51, in load_once
    module = load_fn()
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorboard/compat/__init__.py", line 50, in tf
    import tensorflow  # pylint: disable=g-import-not-at-top
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorflow/__init__.py", line 99, in <module>
    from tensorflow_core import *
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorflow_core/__init__.py", line 755, in <module>
    _site_packages_dirs += [_p for _p in _sys.path if 'site-packages' in _p]
  File "/home/ai/.virtualenvs/py36/lib/python3.6/site-packages/tensorflow_core/__init__.py", line 755, in <listcomp>
    _site_packages_dirs += [_p for _p in _sys.path if 'site-packages' in _p]
TypeError: argument of type 'PosixPath' is not iterable

The validate dataset does not work with batching

Looking at the code, the validate dataset uses pre_processes, but that does not seem to be called anywhere in the code?
With batch > 1 the images end up with inconsistent sizes.
The train dataset uses transforms, and those are indeed called.

Error in the latest version with grayscale-image training support

Hello, and thank you for your work.
After updating to the June 7 version with grayscale-image support I get an error; the June 5 version was fine.

Command: python tools/train.py --config_file "config/icdar2015_resnet18_FPN_DBhead_polyLR.yaml"
Training runs, but the program crashes in the _eval() function called from _on_epoch_finish(). Error output:
2020-06-09 10:04:54,724 DBNet.pytorch INFO: [925/10000], train_loss: 1.3573, time: 82.1458, lr: 0.000916689623980653
test model: 0%| | 0/25 [00:00<?, ?it/s]Fatal Python error: Cannot recover from stack overflow.

Thread 0x000226c4 (most recent call first):
File "D:\ProgramData\Anaconda3\lib\threading.py", line 299 in wait
File "D:\ProgramData\Anaconda3\lib\threading.py", line 551 in wait
File "D:\ProgramData\Anaconda3\lib\site-packages\tqdm_monitor.py", line 69 in run
File "D:\ProgramData\Anaconda3\lib\threading.py", line 916 in _bootstrap_inner
File "D:\ProgramData\Anaconda3\lib\threading.py", line 884 in _bootstrap

Thread 0x0008ed88 (most recent call first):

Thread 0x000313b4 (most recent call first):

Thread 0x000790d0 (most recent call first):
File "D:\ProgramData\Anaconda3\lib\threading.py", line 299 in wait
File "D:\ProgramData\Anaconda3\lib\queue.py", line 173 in get
File "D:\ProgramData\Anaconda3\lib\site-packages\tensorboard\summary\writer\event_file_writer.py", line 204 in run
File "D:\ProgramData\Anaconda3\lib\threading.py", line 916 in _bootstrap_inner
File "D:\ProgramData\Anaconda3\lib\threading.py", line 884 in _bootstrap

Current thread 0x0008ef20 (most recent call first):
File "D:\ProgramData\Anaconda3\lib\copy.py", line 146 in deepcopy
File "D:\ProgramData\Anaconda3\lib\copy.py", line 215 in _deepcopy_list
File "D:\ProgramData\Anaconda3\lib\copy.py", line 150 in deepcopy
File "D:\ProgramData\Anaconda3\lib\copy.py", line 240 in _deepcopy_dict
File "D:\ProgramData\Anaconda3\lib\copy.py", line 150 in deepcopy
File "E:\xingyueqi\vmdir\DBNet.pytorch-win\base\base_dataset.py", line 54 in getitem
File "E:\xingyueqi\vmdir\DBNet.pytorch-win\base\base_dataset.py", line 74 in getitem
File "E:\xingyueqi\vmdir\DBNet.pytorch-win\base\base_dataset.py", line 74 in getitem
File "E:\xingyueqi\vmdir\DBNet.pytorch-win\base\base_dataset.py", line 74 in getitem
File "E:\xingyueqi\vmdir\DBNet.pytorch-win\base\base_dataset.py", line 74 in getitem

After tracing, the error seems to occur around the following line in def _eval(self, epoch):
for i, batch in tqdm(enumerate(self.validate_loader), total=len(self.validate_loader), desc='test model'):
The data in self.validate_loader cannot be accessed properly, while self.train_loader works fine, although self.train_loader lacks the 'shape' information.

My environment is pytorch 1.4 and the training data is ICDAR2015, in the format:
xxx/ch4_training_images/img_1.jpg xxx/ch4_training_localization_transcription_gt/gt_img_1.txt

What might be the cause of the problem described above? Thanks.

Tensor-to-numpy conversion is a speed bottleneck

@WenmuZhou
Hi,
bitmap = _bitmap.cpu().numpy()  # The first channel
pred = pred.cpu().detach().numpy()
During forward inference these two lines severely limit the speed. How did you deal with this?

Validation images are not resized to a multiple of 32

I found that validation images are resized in class ResizeShortSize (data_loader/modules/augment.py).
The short side is resized to 736 px (a multiple of 32) per the config, but the long side is not guaranteed to be a multiple of 32.
This may shift the detected regions and hurt precision.
In predict.py, the input image is resized so that its sides are multiples of 32,
so the validation images should also be resized to multiples of 32.
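
A hedged sketch of resizing both sides to multiples of 32, in the spirit of what predict.py does (the function name and rounding choice here are illustrative, not the project's exact code):

import math
import cv2

def resize_to_multiple_of_32(im, short_size=736):
    # scale the short side to roughly short_size, then round both sides up to multiples of 32
    h, w = im.shape[:2]
    scale = short_size / min(h, w)
    new_h = int(math.ceil(h * scale / 32) * 32)
    new_w = int(math.ceil(w * scale / 32) * 32)
    return cv2.resize(im, (new_w, new_h))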

Segmentation body and Segmentation head

Hello, and many thanks for your contribution. One thing I am not clear about: does this project add an FPN on top of the paper's model? The FPN in the segmentation body and the DBHead in the segmentation head feel redundant to me.
Your code shows the backbone output passing through the FPN and then through the DBHead, and these two seem to do the same thing.

why resize_text_polys=false on validation config

@WenmuZhou
Thank you for sharing this great work.

I found resize_text_polys: false in icdar2015_resnet18_FPN_DBhead_polyLR.yaml.
resize_text_polys is referenced in ResizeShortSize() when resizing validation images.

scale = self.short_size / short_edge
im = cv2.resize(im, dsize=None, fx=scale, fy=scale)
if self.resize_text_polys:
    text_polys *= scale

When an image is resized, the text box positions should also be scaled by the same factor,
so I suppose resize_text_polys should be true.

Is there any reason that you set resize_text_polys=false?

unnecessary value `1e-8` on calculating f1 score

There is an unnecessary 1e-8 term added to avoid division by zero.

utils/ocr_metric/icdar2015/quad_metric.py

fmeasure_score = 2 * precision.val * recall.val / (precision.val + recall.val + 1e-8)

Yes, 1e-8 is a very small value, but it can still affect the f1 score.
I recommend calculating it like this instead:

fmeasure_score = 0 if (precision.val + recall.val) == 0 else 2 * precision.val * recall.val / (precision.val + recall.val)

PAD is not working?

I saved the output images of dataset.py -> class AlignCollate -> NormalizePAD & ResizeNormalize, and found that PAD is not working: the output of NormalizePAD is the same as that of ResizeNormalize.
The PAD output was only resized; it was not padded at the original aspect ratio and filled with black pixels.

About open_dataset

In the open_dataset format, can an annotated polygon have more than 4 points? Thanks.
Some annotations in the ICDAR2019 LSVT dataset have more than 4 points, and I would like to convert them to open_dataset format for training. Will that work?

Error when training on grayscale images

Hello. Using ICDAR2015 with icdar2015_dcn_resnet18_FPN_DBhead_polyLR.yaml and img_mode: RGB, training runs fine; but when I change img_mode: RGB to img_mode: GRAY to try grayscale input, I get the following error:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:/project/OCR/DBNet.pytorch/train.py", line 74, in <module>
    main(config)
  File "D:/project/OCR/DBNet.pytorch/train.py", line 58, in main
    trainer.train()
  File "D:\project\OCR\DBNet.pytorch\base\base_trainer.py", line 103, in train
    self.epoch_result = self._train_epoch(epoch)
  File "D:\project\OCR\DBNet.pytorch\trainer\trainer.py", line 46, in _train_epoch
    for i, batch in enumerate(self.train_loader):
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
    data = self._next_data()
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data
    idx, data = self._get_data()
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data
    success, data = self._try_get_data()
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 38096) exited unexpectedly

How should the current code be modified to make grayscale training work?
