gwxie / document-dewarping-with-control-points
Document Dewarping with Control Points
License: MIT License
The paper claims the code and dataset will be released in this repository. Can we have an estimate of when this will be added, since the conference has now taken place?
Thank you :)
thanks!
python test.py
Traceback (most recent call last):
File "test.py", line 183, in
train(args)
File "test.py", line 41, in train
_re_date = re_date.search(args.resume.name).group(0)
AttributeError: 'str' object has no attribute 'name'
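The error suggests `args.resume` arrives as a plain string, while the code expects a path object with a `.name` attribute (the `Namespace` dumps elsewhere in this thread show `resume=PosixPath(...)`). A minimal sketch of a likely fix, assuming the checkpoint filename shown here is hypothetical:

```python
from pathlib import Path

# args.resume was passed as a plain str, but the code calls args.resume.name;
# wrapping it in pathlib.Path restores the .name attribute.
resume = "/path/to/2021-02-03_checkpoint.pkl"  # hypothetical checkpoint path
resume_path = Path(resume)
print(resume_path.name)  # "2021-02-03_checkpoint.pkl"
```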
Why does the network structure in your code differ from the one proposed in the paper? The paper describes an FCN encoder, but the code uses a parallel network structure rather than an FCN. Could you explain why?
Excuse me?
Where is the config.js file located, and where should the dataset be placed? I am running the model for the first time.
I ran into issues when training the model with 13k samples.
<_io.BufferedReader name='/home/admin1/mnt_raid/source/caopv/dewaping/Source/Dataset/Train/color/452new_40_2_fold.gw'>
<_io.BufferedReader name='/home/admin1/mnt_raid/source/caopv/dewaping/Source/Dataset/Train/color/new_1013_19_curve.gw'>
<_io.BufferedReader name='/home/admin1/mnt_raid/source/caopv/dewaping/Source/Dataset/Train/color/new_1324_17_fold.gw'>
Traceback (most recent call last):
File "train.py", line 319, in
train(args)
File "train.py", line 143, in train
for i, (images, labels, segment) in enumerate(trainloader):
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
data = self._next_data()
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1327, in _next_data
return self._process_data(data)
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1373, in _process_data
data.reraise()
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/_utils.py", line 461, in reraise
raise exception
_pickle.UnpicklingError: Caught UnpicklingError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/mnt/raid1/software_app/anaconda3/envs/doctr/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/mnt/raid1/source/caopv/dewaping/Document-Dewarping-with-Control-Points/Source/dataloader.py", line 147, in __getitem__
perturbed_data = pickle.load(f)
_pickle.UnpicklingError: pickle data was truncated
Could you explain how to evaluate the MS-SSIM and LD metrics?
Could you tell us which paper you follow for this evaluation? Thanks @gwxie
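For context: MS-SSIM and LD are the DocUNet benchmark metrics, where MS-SSIM compares the dewarped result against the scanned ground truth and LD measures local distortion via a dense SIFT-flow displacement field. As a rough illustration of the SSIM side only, here is a single-scale, whole-image SSIM in NumPy; the actual benchmark metric is multi-scale and windowed, so treat this as a simplified sketch, not the evaluation protocol:

```python
import numpy as np

def ssim_global(a, b, data_range=255.0):
    """Single-scale SSIM computed over the whole image.

    Simplification of the benchmark's multi-scale, windowed MS-SSIM:
    one set of global means/variances instead of a sliding Gaussian window.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2)
    )
```

Identical images score 1.0; the score drops as structure diverges.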
Hello, there are some problems when I run train.py as you suggested. Could you give me some help?
try:
    FlatImg.validateOrTestModelV3(epoch, trian_t, validate_test='v_l4')
    FlatImg.validateOrTestModelV3(epoch, 0, validate_test='t')
except:
    print(' Error: validate or test')
The training output shows 【 Error: validate or test】. What is the purpose of this code, and will this error affect the training process?
Looking forward to your reply!
I want to train the model again with 61 control points.
Can you explain how to do that?
Thanks
Thanks for sharing the code and dataset. The encoder-only architecture makes DDCP faster and lighter than other methods, I really like the idea. I try to reimplement the paper, however, some training details are missing in the paper.
What values of α and β are used for the pre-trained model? In utilsV4.py they are all equal to 1, and epochs=300.
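For reference, the loss the question is about combines a Smooth L1 position term with a differential-coordinate term. A hedged NumPy sketch of that weighting, with α = β = 1 as in utilsV4.py (the function shapes and names here are assumptions, not the repository's code):

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Elementwise Smooth L1: quadratic near zero, linear beyond beta."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def control_point_loss(pred, gt, alpha=1.0, beta_w=1.0):
    """Sketch: position term plus differential-coordinate term.

    pred, gt: (rows, cols, 2) control-point grids. The differential term
    penalizes mismatched offsets between neighbouring points, which
    encourages locally smooth grids; alpha/beta_w weight the two terms.
    """
    position = smooth_l1(pred - gt).mean()
    differential = smooth_l1(np.diff(pred, axis=0) - np.diff(gt, axis=0)).mean()
    return alpha * position + beta_w * differential
```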
First of all, thank you very much for your project!
I encountered a problem when using a CPU and executing the proposed command:
python test.py --data_path_test=./your/test/data/path/ --parallel None --schema test --batch_size 1
If you pass --parallel None, the parallel argument will not actually be None (argparse stores the string 'None'). You can simply omit the parameter, because its default value is already None: python test.py --data_path_test=./your/test/data/path/ --schema test --batch_size 1
Best regards
Constantin
Hello, and thank you very much for sharing the code and the synthesis scheme!
Could I obtain the original images from before the synthesis?
I would like the rectified image to keep roughly the same resolution as the original, without loss of quality. How should I adjust the parameters?
Also, I noticed that text is deformed after rectification: sometimes flattened, sometimes stretched. Is it possible to avoid this deformation? Thanks!
@gwxie I am training your model on a custom dataset, but the loss is not decreasing, so the model does not converge.
What should I do to make the loss decrease?
Is the pre-trained model trained on the entire dataset?
Hello, first of all, thank you for your previous answers! Secondly, I encountered some problems when I ran test.py. The test results are shown in the figure above; as you can see, the fiducial points are concentrated in the upper-left corner. Thirdly, what does the model output segment_regress represent? Could you give me some suggestions?
Inference is too slow at test time.
I tried my best to download it from Baidu, but every approach failed.
I used your scripts for synthesizing my own dataset. However, I'm having trouble applying my dataset in training.
I followed your datasets in BaiduYun, and formed a directory as follows:
I found that cv2.imread(im_path, flags=cv2.IMREAD_COLOR) cannot read the *.gw files as color images, and I got a result like:
cv2.error: Caught error in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/xujh/miniconda3/envs/test/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File "/home/xujh/miniconda3/envs/test/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/xujh/miniconda3/envs/test/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/xujh/Figure_process/Document-Dewarping-with-Control-Points/Source/dataloader.py", line 118, in __getitem__
im = self.resize_im(im)
File "/home/xujh/Figure_process/Document-Dewarping-with-Control-Points/Source/dataloader.py", line 173, in resize_im
im = cv2.resize(im, (992, 992), interpolation=cv2.INTER_LINEAR)
cv2.error: OpenCV(4.5.5) /io/opencv/modules/imgproc/src/resize.cpp:4052: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
I realize I am missing something important here. Can you give me some help?
Hello! During training I get a 'pickle data was truncated' error. I re-downloaded the pkl files, but the same problem occurs. What could be causing this?
The full output is as follows:
Namespace(arch='Document-Dewarping-with-Control-Points', batch_size=8, data_path_test=PosixPath('/home/alyson/PycharmProjects/Document-Dewarping-with-Control-Points/Source/dataset/fiducial1024/png'), data_path_train='/media/alyson/DataDisk1/fiducial1024/fiducial1024/fiducial1024_v1', data_path_validate='/media/alyson/DataDisk1/fiducial1024/fiducial1024/fiducial1024_v1/validate', img_shrink=None, l_rate=0.0002, n_epoch=300, optimizer='adam', output_path=PosixPath('/home/alyson/PycharmProjects/Document-Dewarping-with-Control-Points/Source/flat'), parallel=['0'], print_freq=60, resume=PosixPath('/home/alyson/PycharmProjects/Document-Dewarping-with-Control-Points/Source/ICDAR2021/2021-02-03 16:15:55/143/2021-02-03 16_15_55flat_img_by_fiducial_points-fiducial1024_v1.pkl'), schema='train')
------load DilatedResnetForFlatByFiducialPointsS2------
Loading model and optimizer from checkpoint '/home/alyson/PycharmProjects/Document-Dewarping-with-Control-Points/Source/ICDAR2021/2021-02-03 16:15:55/143/2021-02-03 16_15_55flat_img_by_fiducial_points-fiducial1024_v1.pkl'
Loaded checkpoint '2021-02-03 16_15_55flat_img_by_fiducial_points-fiducial1024_v1.pkl' (epoch 143)
The dataset archive cannot be extracted; use 7z.
I ran test.py, but it cannot save the image.
Is there any error message? I do not understand.
I recently read the open-source code for your paper "Document Dewarping with Control Points"; thank you very much for your generous open-source work. I noticed that your loss uses Smooth L1 loss together with a differential-coordinate loss, which is consistent with the paper. I also noticed that the released code additionally provides an edge loss, a cross-line loss, and a rectangle loss. Have you tested whether adding these losses improves results? My intuition is that the points on the edges may be more important, but I have not run experiments.
Also, Figure 5 in the paper mentions setting the neighbor-vertices hyperparameter to k=5 and k=17. Have you tried other values of k? Is a larger k always better?
Looking forward to your reply!