aaltovision / dgc-net
A PyTorch implementation of "DGC-Net: Dense Geometric Correspondence Network"
License: Other
DGC-Net$ python3.7 eval.py --image-data-path ~/hpatches-sequences-release
0%| | 0/59 [00:00<?, ?it/s]torch.Size([1, 225, 15, 15])
Running the first test from the README, it only prints a torch.Size with the progress bar stuck at 0%. Probably not quite right, is it? Earlier it reported
DGC-Net$ python3.7 eval.py --image-data-path ~/hpatches-sequences-release
0%| | 0/59 [00:00<?, ?it/s]torch.Size([1, 225, 15, 15])
Traceback (most recent call last):
  File "eval.py", line 93, in <module>
    device)
  File "/home/ubuntu/sticker_detection/DGC-Net/utils/evaluate.py", line 61, in calculate_epe_hpatches
    estimates_grid, estimates_mask = net(source_img, target_img)
  File "/home/ubuntu/anaconda3/envs/match/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/match/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ubuntu/anaconda3/envs/match/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/sticker_detection/DGC-Net/model/net.py", line 68, in forward
    sys.exit()
NameError: name 'sys' is not defined
before, which I fixed by importing sys, though the sys.exit() in forward() looks like a leftover debug statement.
Thanks for your excellent work!
I'm very interested in your research, but I can't download the Tokyo Time Machine dataset from your links.
Maybe my question is naive, but I'm eager to get your reply!
Datasets
Place recognition datasets:
Tokyo Time Machine: available on request. The train/val splits are provided with our code.
Tokyo 24/7: available on request here.
Pittsburgh 250k: available on request here. The train/val/test splits are provided with our code.
Tiny subset of Tokyo Time Machine (21 MB). Contains 360 images; intended only for validating that the NetVLAD code is set up correctly.
model/net.py
for k in reversed(range(4)):  # k = 3, 2, 1, 0
    p1, p2 = target_pyr[k], source_pyr[k]
    est_map = F.interpolate(input=estimates_grid[-1], scale_factor=2, mode='bilinear', align_corners=False)
    p1_w = F.grid_sample(p1, est_map.transpose(1, 2).transpose(2, 3))
    est_map = self.__dict__['_modules']['reg_' + str(k)](x1=p1_w, x2=p2, x3=est_map)
    estimates_grid.append(est_map)
In my opinion:
p1, p2 = target_pyr[k], source_pyr[k]
should be:
p1, p2 = source_pyr[k], target_pyr[k]
Is that right? If I'm wrong, please explain it briefly; I'm confused.
Thanks very much!
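Not the author, but here is a way to build intuition about the direction: in F.grid_sample(p1, grid), the first argument is the tensor being resampled, and the grid coordinates live in the output frame, so it is p1 (the target features, as the code stands) that gets warped. A toy nearest-neighbour analogue in plain Python (hypothetical helper, not the repo's code):

```python
def warp_nearest(img, grid):
    # Mirrors the asymmetry of torch.nn.functional.grid_sample:
    # the FIRST argument is the image being warped, and the grid
    # gives, for every OUTPUT pixel, where to read from in `img`.
    return [[img[y][x] for (x, y) in row] for row in grid]

img = [[1, 2],
       [3, 4]]
identity = [[(0, 0), (1, 0)], [(0, 1), (1, 1)]]  # leaves img unchanged
flip_x = [[(1, 0), (0, 0)], [(1, 1), (0, 1)]]    # mirrors img horizontally

assert warp_nearest(img, identity) == [[1, 2], [3, 4]]
assert warp_nearest(img, flip_x) == [[2, 1], [4, 3]]
```

Whichever tensor is passed first is the one deformed toward the other's frame, so swapping p1 and p2 would change which image the estimated grid is expressed in, not just relabel the variables.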
Additionally, I think your work is outstanding. My graduation project will likely be closely related to it, so I'd like to learn about the details.
Forgive my poor English; it can't fully express my gratitude.
Hi Melekhov, I see that the H parameters are fixed in the csv file. I'm interested in the homography augmentation parameters you used. Could you please provide some guidelines?
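While waiting for the authors: one common way to sample such parameters (a hypothetical scheme, not necessarily the one used to build the csv) is to compose a 3x3 homography from a small rotation, translation, and perspective component:

```python
import math
import random

def matmul3(A, B):
    # Plain 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def random_homography(rng, max_angle=0.2, max_shift=0.1, max_persp=1e-3):
    # Hypothetical sampler: a small in-plane rotation + translation,
    # composed with a mild perspective term. The nine entries are the
    # H11..H33 values a csv row would store (row-major).
    a = rng.uniform(-max_angle, max_angle)
    tx = rng.uniform(-max_shift, max_shift)
    ty = rng.uniform(-max_shift, max_shift)
    px = rng.uniform(-max_persp, max_persp)
    py = rng.uniform(-max_persp, max_persp)
    rot = [[math.cos(a), -math.sin(a), tx],
           [math.sin(a),  math.cos(a), ty],
           [0.0,          0.0,         1.0]]
    persp = [[1.0, 0.0, 0.0],
             [0.0, 1.0, 0.0],
             [px,  py,  1.0]]
    return matmul3(rot, persp)

H = random_homography(random.Random(0))  # H[i][j] fills the H{i+1}{j+1} column
```

The ranges here are made-up; in practice they would be tuned so the warped image stays mostly inside the frame.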
Hi, I want to train on my own data. What should I do?
How can I generate data like the contents of homo_aff_tps_train.csv? Is there a tool for that?
Apologies if my question sounds dumb!
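In case it helps: the csv appears to store, per row, a subfolder, two image names, the image size, and the nine homography entries. A sketch of writing rows in that shape with the stdlib csv module (column names inferred from the released file, so treat them as assumptions):

```python
import csv
import io

# Assumed column layout: obj (subfolder), im1/im2 (image names),
# Him/Wim (height/width), then the 3x3 homography H11..H33 row-major.
COLS = (["obj", "im1", "im2", "Him", "Wim"]
        + [f"H{i}{j}" for i in (1, 2, 3) for j in (1, 2, 3)])

def write_rows(fh, rows):
    w = csv.DictWriter(fh, fieldnames=COLS)
    w.writeheader()
    for obj, im1, im2, (height, width), H in rows:
        flat = [H[i][j] for i in range(3) for j in range(3)]
        w.writerow(dict(zip(COLS, [obj, im1, im2, height, width] + flat)))

buf = io.StringIO()
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
write_rows(buf, [("scene1", "a.jpg", "b.jpg", (240, 240), identity)])
```

Each H would come from whatever transform sampler you choose for your own images.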
I would like to know whether it makes sense to use NetVLAD/DGC-Net for place recognition problems dealing with sequences of images (SLAM) across seasonal changes (no significant viewpoint changes).
What I have in mind is the Nordland dataset, where images mainly suffer from illumination changes across seasons.
Cheers,
Is this Python 2 or PyTorch version specific?
python3 eval.py --image-data-path ~/hpatches-sequences-release
Traceback (most recent call last):
  File "eval.py", line 8, in <module>
    import torchvision.transforms as transforms
  File "/usr/local/lib/python3.6/dist-packages/torchvision/__init__.py", line 1, in <module>
    from torchvision import models
  File "/usr/local/lib/python3.6/dist-packages/torchvision/models/__init__.py", line 12, in <module>
    from . import detection
  File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/__init__.py", line 1, in <module>
    from .faster_rcnn import *
  File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/faster_rcnn.py", line 13, in <module>
    from .rpn import AnchorGenerator, RPNHead, RegionProposalNetwork
  File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/rpn.py", line 8, in <module>
    from . import _utils as det_utils
  File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/_utils.py", line 74, in <module>
    @torch.jit.script
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 364, in script
    graph = _script_graph(fn, _frames_up=_frames_up + 1)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 359, in _script_graph
    ast = get_jit_ast(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 132, in get_jit_ast
    return build_def(SourceRangeFactory(source), py_ast.body[0])
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 151, in build_def
    build_stmts(ctx, body))
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 123, in build_stmts
    stmts = [build_stmt(ctx, s) for s in stmts]
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 123, in <listcomp>
    stmts = [build_stmt(ctx, s) for s in stmts]
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 205, in build_Assign
    rhs = build_expr(ctx, stmt.value)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 314, in build_Call
    func = build_expr(ctx, expr.func)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 300, in build_Attribute
    value = build_expr(ctx, expr.value)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/frontend.py", line 422, in build_Subscript
    raise NotSupportedError(base.range(), "slicing multiple dimensions at the same time isn't supported yet")
torch.jit.frontend.NotSupportedError: slicing multiple dimensions at the same time isn't supported yet
proposals (Tensor): boxes to be encoded
"""
# perform some unpacking to make it JIT-fusion friendly
wx = weights[0]
wy = weights[1]
ww = weights[2]
wh = weights[3]
proposals_x1 = proposals[:, 0].unsqueeze(1)
~~~~~~~~~ <--- HERE
proposals_y1 = proposals[:, 1].unsqueeze(1)
proposals_x2 = proposals[:, 2].unsqueeze(1)
proposals_y2 = proposals[:, 3].unsqueeze(1)
reference_boxes_x1 = reference_boxes[:, 0].unsqueeze(1)
reference_boxes_y1 = reference_boxes[:, 1].unsqueeze(1)
reference_boxes_x2 = reference_boxes[:, 2].unsqueeze(1)
reference_boxes_y2 = reference_boxes[:, 3].unsqueeze(1)
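This crash happens while torchvision is merely being imported: an older torch jit frontend is scripting a newer torchvision, so aligning the installed torch and torchvision versions is the usual fix rather than patching code. For intuition only: the rejected pattern is the tuple subscript proposals[:, 0], which is equivalent to chaining single-dimension indexing steps, illustrated here on plain nested lists (not torchvision code):

```python
def col(matrix, j):
    # matrix[:, j] written as one subscript per dimension: iterate dim 0,
    # then index dim 1 -- the form the old TorchScript frontend accepted.
    return [row[j] for row in matrix]

proposals = [[0.0, 1.0, 2.0, 3.0],
             [4.0, 5.0, 6.0, 7.0]]
proposals_x1 = col(proposals, 0)  # the x1 coordinate of every box
```

Later torchvision releases avoid the pattern entirely, which is why upgrading both packages together makes the error disappear.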
In the csv file, obj = subpath, img1/img2 = image names, and Him/Wim = image height and width. But what are the columns H11, H12, H13, H21, H22, H23, H31, H32, H33?
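Not the author, but those nine values look like the entries of a 3x3 homography H relating pixel coordinates of the two images; a point maps through it in homogeneous coordinates. A minimal sketch (hypothetical helper):

```python
def apply_h(H, x, y):
    # Map pixel (x, y) through the homography: multiply (x, y, 1) by H
    # and dehomogenize by the third coordinate.
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = [[1, 0, 2], [0, 1, 3], [0, 0, 1]]  # translate by (2, 3)

assert apply_h(identity, 5, 7) == (5.0, 7.0)
assert apply_h(shift, 5, 7) == (7.0, 10.0)
```

So H11..H33 would simply be that matrix flattened row by row into the csv columns.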
Hi, great job! Do you have a demo script, i.e. code that takes two images (source and target) as input and outputs the transformed image? If so, could you please provide it? Thanks!