dbnet-lite.pytorch's Introduction

  • 👋 Hi, I’m @BADBADBADBOY



dbnet-lite.pytorch's Issues

model size(M)

Hello! How was the model size (M) figure obtained?
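One common way to arrive at a "model size (M)" figure is to multiply the parameter count by the bytes per parameter (4 for float32). A minimal sketch, with an illustrative parameter count rather than this repo's exact number:

```python
def model_size_mb(num_params, bytes_per_param=4):
    """Estimate model size in megabytes: parameters x bytes per parameter (4 for float32)."""
    return num_params * bytes_per_param / 1e6

# ResNet18 has roughly 11.7M parameters, so in float32:
print(model_size_mb(11_700_000))  # ~46.8 MB
```

Saved checkpoint files are often slightly larger because they also store optimizer state and metadata.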

RuntimeError: Error(s) in loading state_dict for DBNet:

When I run train_fintune.py, I get the following error:

Resuming from checkpoint.
Traceback (most recent call last):
  File "/home/DISCOVER_summer2022/zhangt/dblite/pruned/train_fintune.py", line 212, in <module>
    train_net(config)
  File "/home/DISCOVER_summer2022/zhangt/dblite/pruned/train_fintune.py", line 90, in train_net
    model.load_state_dict(checkpoint['state_dict'])
  File "/home/DISCOVER_summer2022/zhangt/.conda/envs/dbnet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1625, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DBNet:
	Missing key(s) in state_dict: "backbone.conv1.weight", "backbone.bn1.weight", "backbone.bn1.bias", "backbone.bn1.running_mean", "backbone.bn1.running_var", "backbone.layer1.0.conv1.weight", "backbone.layer1.0.bn1.weight", "backbone.layer1.0.bn1.bias", "backbone.layer1.0.bn1.running_mean", "backbone.layer1.0.bn1.running_var", "backbone.layer1.0.conv2.weight", "backbone.layer1.0.bn2.weight", "backbone.layer1.0.bn2.bias", "backbone.layer1.0.bn2.running_mean", "backbone.layer1.0.bn2.running_var", "backbone.layer1.1.conv1.weight", "backbone.layer1.1.bn1.weight", "backbone.layer1.1.bn1.bias", "backbone.layer1.1.bn1.running_mean", "backbone.layer1.1.bn1.running_var", "backbone.layer1.1.conv2.weight", "backbone.layer1.1.bn2.weight", "backbone.layer1.1.bn2.bias", "backbone.layer1.1.bn2.running_mean", "backbone.layer1.1.bn2.running_var", "backbone.layer2.0.conv1.weight", "backbone.layer2.0.bn1.weight", "backbone.layer2.0.bn1.bias", "backbone.layer2.0.bn1.running_mean", "backbone.layer2.0.bn1.running_var", "backbone.layer2.0.conv2.weight", "backbone.layer2.0.bn2.weight", "backbone.layer2.0.bn2.bias", "backbone.layer2.0.bn2.running_mean", "backbone.layer2.0.bn2.running_var", "backbone.layer2.0.downsample.0.weight", "backbone.layer2.0.downsample.1.weight", "backbone.layer2.0.downsample.1.bias", "backbone.layer2.0.downsample.1.running_mean", "backbone.layer2.0.downsample.1.running_var", "backbone.layer2.1.conv1.weight", "backbone.layer2.1.bn1.weight", "backbone.layer2.1.bn1.bias", "backbone.layer2.1.bn1.running_mean", "backbone.layer2.1.bn1.running_var", "backbone.layer2.1.conv2.weight", "backbone.layer2.1.bn2.weight", "backbone.layer2.1.bn2.bias", "backbone.layer2.1.bn2.running_mean", "backbone.layer2.1.bn2.running_var", "backbone.layer3.0.conv1.weight", "backbone.layer3.0.bn1.weight", "backbone.layer3.0.bn1.bias", "backbone.layer3.0.bn1.running_mean", "backbone.layer3.0.bn1.running_var", "backbone.layer3.0.conv2.weight", "backbone.layer3.0.bn2.weight", 
"backbone.layer3.0.bn2.bias", "backbone.layer3.0.bn2.running_mean", "backbone.layer3.0.bn2.running_var", "backbone.layer3.0.downsample.0.weight", "backbone.layer3.0.downsample.1.weight", "backbone.layer3.0.downsample.1.bias", "backbone.layer3.0.downsample.1.running_mean", "backbone.layer3.0.downsample.1.running_var", "backbone.layer3.1.conv1.weight", "backbone.layer3.1.bn1.weight", "backbone.layer3.1.bn1.bias", "backbone.layer3.1.bn1.running_mean", "backbone.layer3.1.bn1.running_var", "backbone.layer3.1.conv2.weight", "backbone.layer3.1.bn2.weight", "backbone.layer3.1.bn2.bias", "backbone.layer3.1.bn2.running_mean", "backbone.layer3.1.bn2.running_var", "backbone.layer4.0.conv1.weight", "backbone.layer4.0.bn1.weight", "backbone.layer4.0.bn1.bias", "backbone.layer4.0.bn1.running_mean", "backbone.layer4.0.bn1.running_var", "backbone.layer4.0.conv2.weight", "backbone.layer4.0.bn2.weight", "backbone.layer4.0.bn2.bias", "backbone.layer4.0.bn2.running_mean", "backbone.layer4.0.bn2.running_var", "backbone.layer4.0.downsample.0.weight", "backbone.layer4.0.downsample.1.weight", "backbone.layer4.0.downsample.1.bias", "backbone.layer4.0.downsample.1.running_mean", "backbone.layer4.0.downsample.1.running_var", "backbone.layer4.1.conv1.weight", "backbone.layer4.1.bn1.weight", "backbone.layer4.1.bn1.bias", "backbone.layer4.1.bn1.running_mean", "backbone.layer4.1.bn1.running_var", "backbone.layer4.1.conv2.weight", "backbone.layer4.1.bn2.weight", "backbone.layer4.1.bn2.bias", "backbone.layer4.1.bn2.running_mean", "backbone.layer4.1.bn2.running_var", "decode.head.in5.conv.weight", "decode.head.in5.bn.weight", "decode.head.in5.bn.bias", "decode.head.in5.bn.running_mean", "decode.head.in5.bn.running_var", "decode.head.in4.conv.weight", "decode.head.in4.bn.weight", "decode.head.in4.bn.bias", "decode.head.in4.bn.running_mean", "decode.head.in4.bn.running_var", "decode.head.in3.conv.weight", "decode.head.in3.bn.weight", "decode.head.in3.bn.bias", "decode.head.in3.bn.running_mean", 
"decode.head.in3.bn.running_var", "decode.head.in2.conv.weight", "decode.head.in2.bn.weight", "decode.head.in2.bn.bias", "decode.head.in2.bn.running_mean", "decode.head.in2.bn.running_var", "decode.head.out5.0.conv.weight", "decode.head.out5.0.bn.weight", "decode.head.out5.0.bn.bias", "decode.head.out5.0.bn.running_mean", "decode.head.out5.0.bn.running_var", "decode.head.out4.0.conv.weight", "decode.head.out4.0.bn.weight", "decode.head.out4.0.bn.bias", "decode.head.out4.0.bn.running_mean", "decode.head.out4.0.bn.running_var", "decode.head.out3.0.conv.weight", "decode.head.out3.0.bn.weight", "decode.head.out3.0.bn.bias", "decode.head.out3.0.bn.running_mean", "decode.head.out3.0.bn.running_var", "decode.head.out2.conv.weight", "decode.head.out2.bn.weight", "decode.head.out2.bn.bias", "decode.head.out2.bn.running_mean", "decode.head.out2.bn.running_var", "decode.binarize.0.weight", "decode.binarize.1.weight", "decode.binarize.1.bias", "decode.binarize.1.running_mean", "decode.binarize.1.running_var", "decode.binarize.3.weight", "decode.binarize.3.bias", "decode.binarize.4.weight", "decode.binarize.4.bias", "decode.binarize.4.running_mean", "decode.binarize.4.running_var", "decode.binarize.6.weight", "decode.binarize.6.bias", "decode.thresh.0.weight", "decode.thresh.1.weight", "decode.thresh.1.bias", "decode.thresh.1.running_mean", "decode.thresh.1.running_var", "decode.thresh.3.weight", "decode.thresh.3.bias", "decode.thresh.4.weight", "decode.thresh.4.bias", "decode.thresh.4.running_mean", "decode.thresh.4.running_var", "decode.thresh.6.weight", "decode.thresh.6.bias". 
	Unexpected key(s) in state_dict: "module.backbone.conv1.weight", "module.backbone.bn1.weight", "module.backbone.bn1.bias", "module.backbone.bn1.running_mean", "module.backbone.bn1.running_var", "module.backbone.bn1.num_batches_tracked", "module.backbone.layer1.0.conv1.weight", "module.backbone.layer1.0.bn1.weight", "module.backbone.layer1.0.bn1.bias", "module.backbone.layer1.0.bn1.running_mean", "module.backbone.layer1.0.bn1.running_var", "module.backbone.layer1.0.bn1.num_batches_tracked", "module.backbone.layer1.0.conv2.weight", "module.backbone.layer1.0.bn2.weight", "module.backbone.layer1.0.bn2.bias", "module.backbone.layer1.0.bn2.running_mean", "module.backbone.layer1.0.bn2.running_var", "module.backbone.layer1.0.bn2.num_batches_tracked", "module.backbone.layer1.1.conv1.weight", "module.backbone.layer1.1.bn1.weight", "module.backbone.layer1.1.bn1.bias", "module.backbone.layer1.1.bn1.running_mean", "module.backbone.layer1.1.bn1.running_var", "module.backbone.layer1.1.bn1.num_batches_tracked", "module.backbone.layer1.1.conv2.weight", "module.backbone.layer1.1.bn2.weight", "module.backbone.layer1.1.bn2.bias", "module.backbone.layer1.1.bn2.running_mean", "module.backbone.layer1.1.bn2.running_var", "module.backbone.layer1.1.bn2.num_batches_tracked", "module.backbone.layer2.0.conv1.weight", "module.backbone.layer2.0.bn1.weight", "module.backbone.layer2.0.bn1.bias", "module.backbone.layer2.0.bn1.running_mean", "module.backbone.layer2.0.bn1.running_var", "module.backbone.layer2.0.bn1.num_batches_tracked", "module.backbone.layer2.0.conv2.weight", "module.backbone.layer2.0.bn2.weight", "module.backbone.layer2.0.bn2.bias", "module.backbone.layer2.0.bn2.running_mean", "module.backbone.layer2.0.bn2.running_var", "module.backbone.layer2.0.bn2.num_batches_tracked", "module.backbone.layer2.0.downsample.0.weight", "module.backbone.layer2.0.downsample.1.weight", "module.backbone.layer2.0.downsample.1.bias", "module.backbone.layer2.0.downsample.1.running_mean", 
"module.backbone.layer2.0.downsample.1.running_var", "module.backbone.layer2.0.downsample.1.num_batches_tracked", "module.backbone.layer2.1.conv1.weight", "module.backbone.layer2.1.bn1.weight", "module.backbone.layer2.1.bn1.bias", "module.backbone.layer2.1.bn1.running_mean", "module.backbone.layer2.1.bn1.running_var", "module.backbone.layer2.1.bn1.num_batches_tracked", "module.backbone.layer2.1.conv2.weight", "module.backbone.layer2.1.bn2.weight", "module.backbone.layer2.1.bn2.bias", "module.backbone.layer2.1.bn2.running_mean", "module.backbone.layer2.1.bn2.running_var", "module.backbone.layer2.1.bn2.num_batches_tracked", "module.backbone.layer3.0.conv1.weight", "module.backbone.layer3.0.bn1.weight", "module.backbone.layer3.0.bn1.bias", "module.backbone.layer3.0.bn1.running_mean", "module.backbone.layer3.0.bn1.running_var", "module.backbone.layer3.0.bn1.num_batches_tracked", "module.backbone.layer3.0.conv2.weight", "module.backbone.layer3.0.bn2.weight", "module.backbone.layer3.0.bn2.bias", "module.backbone.layer3.0.bn2.running_mean", "module.backbone.layer3.0.bn2.running_var", "module.backbone.layer3.0.bn2.num_batches_tracked", "module.backbone.layer3.0.downsample.0.weight", "module.backbone.layer3.0.downsample.1.weight", "module.backbone.layer3.0.downsample.1.bias", "module.backbone.layer3.0.downsample.1.running_mean", "module.backbone.layer3.0.downsample.1.running_var", "module.backbone.layer3.0.downsample.1.num_batches_tracked", "module.backbone.layer3.1.conv1.weight", "module.backbone.layer3.1.bn1.weight", "module.backbone.layer3.1.bn1.bias", "module.backbone.layer3.1.bn1.running_mean", "module.backbone.layer3.1.bn1.running_var", "module.backbone.layer3.1.bn1.num_batches_tracked", "module.backbone.layer3.1.conv2.weight", "module.backbone.layer3.1.bn2.weight", "module.backbone.layer3.1.bn2.bias", "module.backbone.layer3.1.bn2.running_mean", "module.backbone.layer3.1.bn2.running_var", "module.backbone.layer3.1.bn2.num_batches_tracked", 
"module.backbone.layer4.0.conv1.weight", "module.backbone.layer4.0.bn1.weight", "module.backbone.layer4.0.bn1.bias", "module.backbone.layer4.0.bn1.running_mean", "module.backbone.layer4.0.bn1.running_var", "module.backbone.layer4.0.bn1.num_batches_tracked", "module.backbone.layer4.0.conv2.weight", "module.backbone.layer4.0.bn2.weight", "module.backbone.layer4.0.bn2.bias", "module.backbone.layer4.0.bn2.running_mean", "module.backbone.layer4.0.bn2.running_var", "module.backbone.layer4.0.bn2.num_batches_tracked", "module.backbone.layer4.0.downsample.0.weight", "module.backbone.layer4.0.downsample.1.weight", "module.backbone.layer4.0.downsample.1.bias", "module.backbone.layer4.0.downsample.1.running_mean", "module.backbone.layer4.0.downsample.1.running_var", "module.backbone.layer4.0.downsample.1.num_batches_tracked", "module.backbone.layer4.1.conv1.weight", "module.backbone.layer4.1.bn1.weight", "module.backbone.layer4.1.bn1.bias", "module.backbone.layer4.1.bn1.running_mean", "module.backbone.layer4.1.bn1.running_var", "module.backbone.layer4.1.bn1.num_batches_tracked", "module.backbone.layer4.1.conv2.weight", "module.backbone.layer4.1.bn2.weight", "module.backbone.layer4.1.bn2.bias", "module.backbone.layer4.1.bn2.running_mean", "module.backbone.layer4.1.bn2.running_var", "module.backbone.layer4.1.bn2.num_batches_tracked", "module.decode.head.in5.conv.weight", "module.decode.head.in5.bn.weight", "module.decode.head.in5.bn.bias", "module.decode.head.in5.bn.running_mean", "module.decode.head.in5.bn.running_var", "module.decode.head.in5.bn.num_batches_tracked", "module.decode.head.in4.conv.weight", "module.decode.head.in4.bn.weight", "module.decode.head.in4.bn.bias", "module.decode.head.in4.bn.running_mean", "module.decode.head.in4.bn.running_var", "module.decode.head.in4.bn.num_batches_tracked", "module.decode.head.in3.conv.weight", "module.decode.head.in3.bn.weight", "module.decode.head.in3.bn.bias", "module.decode.head.in3.bn.running_mean", 
"module.decode.head.in3.bn.running_var", "module.decode.head.in3.bn.num_batches_tracked", "module.decode.head.in2.conv.weight", "module.decode.head.in2.bn.weight", "module.decode.head.in2.bn.bias", "module.decode.head.in2.bn.running_mean", "module.decode.head.in2.bn.running_var", "module.decode.head.in2.bn.num_batches_tracked", "module.decode.head.out5.0.conv.weight", "module.decode.head.out5.0.bn.weight", "module.decode.head.out5.0.bn.bias", "module.decode.head.out5.0.bn.running_mean", "module.decode.head.out5.0.bn.running_var", "module.decode.head.out5.0.bn.num_batches_tracked", "module.decode.head.out4.0.conv.weight", "module.decode.head.out4.0.bn.weight", "module.decode.head.out4.0.bn.bias", "module.decode.head.out4.0.bn.running_mean", "module.decode.head.out4.0.bn.running_var", "module.decode.head.out4.0.bn.num_batches_tracked", "module.decode.head.out3.0.conv.weight", "module.decode.head.out3.0.bn.weight", "module.decode.head.out3.0.bn.bias", "module.decode.head.out3.0.bn.running_mean", "module.decode.head.out3.0.bn.running_var", "module.decode.head.out3.0.bn.num_batches_tracked", "module.decode.head.out2.conv.weight", "module.decode.head.out2.bn.weight", "module.decode.head.out2.bn.bias", "module.decode.head.out2.bn.running_mean", "module.decode.head.out2.bn.running_var", "module.decode.head.out2.bn.num_batches_tracked", "module.decode.binarize.0.weight", "module.decode.binarize.1.weight", "module.decode.binarize.1.bias", "module.decode.binarize.1.running_mean", "module.decode.binarize.1.running_var", "module.decode.binarize.1.num_batches_tracked", "module.decode.binarize.3.weight", "module.decode.binarize.3.bias", "module.decode.binarize.4.weight", "module.decode.binarize.4.bias", "module.decode.binarize.4.running_mean", "module.decode.binarize.4.running_var", "module.decode.binarize.4.num_batches_tracked", "module.decode.binarize.6.weight", "module.decode.binarize.6.bias", "module.decode.thresh.0.weight", "module.decode.thresh.1.weight", 
"module.decode.thresh.1.bias", "module.decode.thresh.1.running_mean", "module.decode.thresh.1.running_var", "module.decode.thresh.1.num_batches_tracked", "module.decode.thresh.3.weight", "module.decode.thresh.3.bias", "module.decode.thresh.4.weight", "module.decode.thresh.4.bias", "module.decode.thresh.4.running_mean", "module.decode.thresh.4.running_var", "module.decode.thresh.4.num_batches_tracked", "module.decode.thresh.6.weight", "module.decode.thresh.6.bias". 

Process finished with exit code 1
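The mismatch above is the classic nn.DataParallel prefix problem: every key in the checkpoint starts with "module." (the checkpoint was saved from a DataParallel-wrapped model), while the finetune model is unwrapped. A common workaround is to strip the prefix before calling load_state_dict; a minimal sketch using pure dict manipulation, independent of torch:

```python
from collections import OrderedDict

def strip_module_prefix(state_dict):
    """Drop the 'module.' prefix that nn.DataParallel prepends to every parameter key."""
    return OrderedDict(
        (k[len("module."):] if k.startswith("module.") else k, v)
        for k, v in state_dict.items()
    )

# In train_fintune.py this would become:
# model.load_state_dict(strip_module_prefix(checkpoint['state_dict']))
```

Alternatively, wrapping the model in nn.DataParallel before loading makes the keys match as-is.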

recall

Hello! Why does the trained model achieve acceptable precision while its recall is noticeably lower?

A question about evaluating polygon-annotated data

Hello, a quick question: during eval, is there no way to compute recall and precision for datasets with polygon labels? It looks like only quadrilaterals or (xmin, ymin, xmax, ymax) boxes are handled.
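For reference, IoU-based evaluation on polygon labels needs polygon areas rather than box areas. The area part can be computed with the shoelace formula (the intersection area additionally requires polygon clipping, e.g. Sutherland-Hodgman, which is omitted here):

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given as [(x, y), ...] vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```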

IndexError: list index out of range

Traceback (most recent call last):
  File "./pruned/prune.py", line 270, in <module>
    prune(config)
  File "./pruned/prune.py", line 173, in prune
    m.out_channels = prued_mask[index_conv].sum()
IndexError: list index out of range
The index goes out of range when using the model produced by sparsity training. I mis-clicked just now, so I'm re-filing this issue.
train:
  gpu_id: '0'
  backbone: 'resnet18'
  pretrained: True
  HeadName: 'DB'
I used the default settings.

Where can DB.pth.tar be downloaded?

Training requires config['train']['resume']: ./checkpoints/DB_resnet18_bs_16_ep_1200/DB.pth.tar. Where can DB.pth.tar be downloaded?

Training dataset

Hello! Were the pre-trained weights you uploaded obtained on the ICDAR 2015 training set?

Errors in loading state_dict for DBNet

python3 ./pruned/prune_inference.py
Traceback (most recent call last):
  File "./pruned/prune_inference.py", line 134, in <module>
    result_dict = test_net(config)
  File "./pruned/prune_inference.py", line 47, in test_net
    model.load_state_dict(state)
  File "/home/jing/.conda/envs/prob/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DBNet:
	size mismatch for backbone.layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 64, 3, 3]).
	size mismatch for backbone.layer1.0.bn1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([24]).
	size mismatch for backbone.layer1.0.bn1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([24]).
	size mismatch for backbone.layer1.0.bn1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([24]).
......
The error message is shown above; only part of it is included because it is very long.
The error occurs at test time, after pruning and finetuning. The prune section of my config file is as follows:
pruned:
  gpu_id: '0'
  scale: [73, 77, 81, 85]
  base_num: 8
  cut_percent: 0.8
  pruned_checkpoints: './pruned/checkpoint/pruned_dict.pth.tar'
  checkpoints_dict: './pruned/checkpoint/pruned_dict.dict'
  save_checkpoints: './pruned/checkpoint'
  checkpoints: './checkpoints/DB_resnet18_bs_16_ep_1200/DB.pth.tar'
  # checkpoints: './checkpoints/DB_resnet18_bs_16_ep_1500/DB.pth.tar'
  finetune_lr: 0.0005
  # resume: './checkpoints/DB_resnet18_bs_16_ep_1200/DB.pth.tar'
  resume: './checkpoints/DB_resnet18_bs_16_ep_1500/DB.pth.tar'
  # restore: True
  restore: False
  n_epoch: 100
  start_val_epoch: 40
The error is raised at line 47 of prune_inference.py: model.load_state_dict(state).
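A quick way to see whether the test-time model matches the pruned checkpoint is to compare parameter shapes key by key before calling load_state_dict. A hedged sketch, where plain {key: shape} dicts stand in for real state_dicts:

```python
def find_shape_mismatches(model_shapes, ckpt_shapes):
    """Return {key: (model_shape, ckpt_shape)} for keys present in both dicts
    whose shapes disagree."""
    return {
        k: (model_shapes[k], ckpt_shapes[k])
        for k in model_shapes.keys() & ckpt_shapes.keys()
        if model_shapes[k] != ckpt_shapes[k]
    }

# In practice:
# find_shape_mismatches({k: tuple(v.shape) for k, v in model.state_dict().items()},
#                       {k: tuple(v.shape) for k, v in state.items()})
```

If every mismatch looks like [24, ...] in the checkpoint versus [64, ...] in the model (or vice versa), the network at inference time was built with the original unpruned channel widths; the saved pruned channel configuration (e.g. the checkpoints_dict file) has to be applied when constructing the model for testing.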

A question about DBHead vs. FPNHead performance

Hello! Thank you very much for open-sourcing this work! I have a few questions from studying your code and would greatly appreciate answers.

In the results reported in this project, the FPN head achieves higher precision and recall than the DB head. Why would that be? The original paper suggests adding DB improves results by at least 2 points; is it because the DB model was pruned? Also, how were the hyperparameters set for the results in the README, for example, was the binary map included during training?
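For context, the differentiable binarization that the DB paper introduces during training is B = 1 / (1 + exp(-k (P - T))), where P is the probability map, T the learned threshold map, and k an amplification factor (50 in the paper). A minimal elementwise sketch:

```python
import math

def db_binarize(p, t, k=50):
    """Approximate (differentiable) binarization from the DB paper:
    B = 1 / (1 + exp(-k * (P - T))), with k typically set to 50."""
    return 1.0 / (1.0 + math.exp(-k * (p - t)))

print(db_binarize(0.5, 0.5))  # 0.5: probability exactly at the threshold
```

The large k makes the sigmoid nearly a step function around T while keeping gradients usable, which is why dropping the binarization branch at training time can change precision/recall.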

Loading a pre-trained model

I want to load the official pre-trained-model-synthtext-resnet18 weights. Can they be loaded, and should they be passed in via resume?
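When pre-trained weights come from a different codebase, a common pattern is a partial load: keep only checkpoint entries whose key and shape match the current model, then load with strict=False. A sketch of the filtering step (written so plain {key: shape-tuple} dicts can stand in for tensors):

```python
def filter_compatible(model_state, ckpt_state):
    """Keep checkpoint entries whose key exists in the model with an identical shape.
    Works on real state_dicts (tensors, via .shape) or on plain {key: shape} dicts."""
    shape = lambda t: tuple(getattr(t, "shape", t))
    return {k: v for k, v in ckpt_state.items()
            if k in model_state and shape(v) == shape(model_state[k])}

# In practice (hypothetical usage, not this repo's exact API):
# model.load_state_dict(filter_compatible(model.state_dict(), pretrained), strict=False)
```

This loads whatever backbone weights match and leaves the rest (e.g. a differently shaped head) at their initialized values.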

The DB model converges slowly

GPU: 1080 Ti
OS: Windows 10
Framework: PyTorch 1.7
I'm training on my own dataset of 180 images, with 20 validation images. Training does converge, but very slowly: it takes around 1000 epochs.
I did not use pre-trained weights, and every image contains the same text.
Does the DB model converge quickly for everyone else? @BADBADBADBOY

Errors occur when compiling the dcn module

work@work-Super-Server:~/deep_learning/ocr/DBnet-lite.pytorch/models/dcn$ sh make.sh
running build_ext
building 'deform_conv_cuda' extension
creating /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build
creating /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6
creating /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6/src
Emitting ninja build file /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
1.10.0
creating build/lib.linux-x86_64-3.6
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6/src/deform_conv_cuda.o /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6/src/deform_conv_cuda_kernel.o -L/usr/local/lib/python3.6/dist-packages/torch/lib -L/usr/local/cuda-10.2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.6/deform_conv_cuda.cpython-36m-x86_64-linux-gnu.so
x86_64-linux-gnu-g++: error: /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6/src/deform_conv_cuda.o: No such file or directory
x86_64-linux-gnu-g++: error: /home/work/deep_learning/ocr/DBnet-lite.pytorch/models/dcn/build/temp.linux-x86_64-3.6/src/deform_conv_cuda_kernel.o: No such file or directory
error: command 'x86_64-linux-gnu-g++' failed with exit status 1
