prunetruong / DenseMatching
Dense matching library based on PyTorch
License: GNU Lesser General Public License v2.1
This error occurs when I run the following command: python run_training.py PWarpC train_weakly_supervised_PWarpC_SFNet_spair_from_pfpascal
Hi Prune,
Thanks for your great work!
I have noticed that you have some checks for the grid_sample function's behavior change in torch versions >= 1.3.
These checks look like:
if float(torch.__version__[:3]) >= 1.3:
However, if the torch version is 1.10.x, this check fails (it compares 1.1 against 1.3 and returns False) and affects the flow values.
I compare the versions with the following lines instead, and it works properly:
from packaging import version
if version.parse(torch.__version__) >= version.parse("1.3"):
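A minimal stdlib-only sketch of why the string-prefix check breaks for 1.10.x (parse_version here is a hypothetical helper for illustration, not from the repo or from packaging):

```python
def parse_version(v):
    """Parse the numeric dotted prefix of a version string into a tuple,
    e.g. "1.10.0+cu113" -> (1, 10, 0)."""
    nums = []
    for part in v.split("+")[0].split("."):
        if not part.isdigit():
            break
        nums.append(int(part))
    return tuple(nums)

# The original check truncates "1.10.0" to "1.1", which compares as < 1.3
assert float("1.10.0"[:3]) < 1.3
# Tuple comparison handles multi-digit components correctly
assert parse_version("1.10.0") >= parse_version("1.3")
assert parse_version("1.2.0") < parse_version("1.3")
```

Tuples compare element by element, so (1, 10) > (1, 3) as intended, while the truncated float comparison silently disables the torch >= 1.3 code path.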
Hi, Thanks for this wonderful project!
I want to train an optical flow network on my own dataset, but the viewpoint changes in my dataset are small. Can the WarpC loss produce good results on such a dataset? I found that the datasets used in this project have large viewpoint changes.
Hi !
Thanks for sharing such wonderful works since GLAMpoints. All the papers and code have greatly helped my research, which I highly appreciate. I'm looking forward to seeing the next paper you release in the near future (perhaps for ECCV).
I have a question about the paper newly accepted at ICCV'21. Currently, the code does not seem to be released yet. If it doesn't bother you, may I ask when the code will be available?
Thanks
Hi,
Thanks for this wonderful project. I would like to know whether the mask is involved in the loss calculation during training.
Hi, I'm curious about how the theta for the homo/tps/affine transformations is generated. Where can I find the code for that part?
Thanks.
Hi! Great code and a well-organized GitHub! Thanks for your work ;)
I'm using your weights to align images using RANSAC-Flow. The only problem I have is that I'm getting some deformation in the images. I looked around your code and in your WarpC paper but didn't find any detail about the size of the kernel used. In RANSAC-Flow it was 7. Did you use the same size?
Thanks!
To those who are trying to run on later versions of CuPy: just replace cupy.util.memoize with cupy.memoize.
Hi, thanks for your great work.
Would you release the code of PDCNet?
And what is the relationship between the estimated flow mu and the output flow y?
Hi, I recently trained PDCNet on the ScanNet dataset but got a worse result. The flow estimation is reasonable, while the probabilistic map is predicted wrongly. Is there something different between ScanNet and MegaDepth?
Hi, I want to use WarpC-RANSAC-Flow to warp my own image pairs. Can you share the corresponding model and test code? Thanks!
Hi, how much disk space is required for the MegaDepth dataset? And how much memory is required for training the WarpC model on MegaDepth?
Thanks for your great works.
I fail to find the following files:
'train_split': 'train_scenes_MegaDepth.txt',
'train_debug_split': 'train_debug_scenes_MegaDepth.txt',
'val_split': 'validation_scenes_MegaDepth.txt',
Could you release more details on how to preprocess the MegaDepth dataset?
Hi, I'm having some problems with image matching using your project. I used my own dataset, but the matching was poor, especially with a small number of iterations, and the resulting matched images were distorted. I followed the documentation, but the problem persists. I would like to ask for some suggestions to improve the quality of image matching, especially on how to avoid image distortion.
Thanks so much for this great work! Do you have an estimated date for the training code to be released?
Thanks!
Hello! I tried to directly download the undistorted reconstructions and aggregated scene information of MegaDepth through the Google Drive link, but I got a 404 error. That might mean something is wrong with the link.
When running demo_single_pair.py, the following error is reported:
Traceback (most recent call last):
File "D:/PyProjects/DenseMatching-main/demos/demo_single_pair.py", line 103, in
network, estimate_uncertainty = select_model(
File "D:\PyProjects\DenseMatching-main\model_selection.py", line 234, in select_model
network = load_network(network, checkpoint_path=checkpoint_fname)
File "D:\PyProjects\DenseMatching-main\model_selection.py", line 28, in load_network
checkpoint_dict = torch.load(checkpoint_path)
File "E:\ProgramData\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py", line 592, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "E:\ProgramData\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py", line 851, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'admin.local'
network = load_network(network, checkpoint_path=checkpoint_fname)
checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
I set the pre-trained model path as path_to_pre_trained_models = '../pre_trained_models/PDCNet_plus_megadepth.pth.tar'.
How can I fix this?
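This is not the repo's official fix (the missing admin.local module is most likely meant to be generated by an environment-setup step in the repo), but as a generic sketch, one workaround for checkpoints whose pickle references an unimportable module is to register a stub module before calling torch.load. EnvironmentSettings below is a hypothetical placeholder for whatever attribute the pickle actually references:

```python
import sys
import types

# Pickles store fully qualified names, so torch.load fails if 'admin.local'
# cannot be imported. Registering a stub module (plus any attribute the
# pickle references) lets unpickling proceed.
stub = types.ModuleType("admin.local")

class EnvironmentSettings:  # hypothetical placeholder for the pickled class
    pass

stub.EnvironmentSettings = EnvironmentSettings
sys.modules["admin.local"] = stub

# After this, torch.load(checkpoint_path, map_location='cpu') can resolve
# references to admin.local during unpickling.
```

If the checkpoint references other names from that module, they would need placeholders too; recreating the real module the repo expects is the cleaner fix.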
Thanks,
MegaDepth
We use the reconstructions provided in the D2-Net repo. You can download the undistorted reconstructions and aggregated scene information folder directly here - Google Drive.
However, this link fails to open. Could you provide it again?
Hi, I have some questions about the supervised loss, as follows:
ss_loss_o, ss_stats_o = self.objective(estimated_flow_target_prime_to_target_directly, mini_batch['flow_map'], mask=mini_batch['mask'],
The mini_batch['flow_map'] is the ground-truth flow you generate in online_triplet_creation.py (flow_gt = self.synthetic_flow_generator(mini_batch=mini_batch, training=training, net=net)), and the target image prime is warped using this flow_gt, so mini_batch['flow_map'] should be flow_target_to_target_prime_directly. Why do you calculate the L1 distance between estimated_flow_target_prime_to_target_directly and mini_batch['flow_map'], rather than the distance between estimated_flow_target_to_target_prime_directly and mini_batch['flow_map']?
Thanks! Looking forward to your reply!
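For reference, the masked L1 objective invoked in the self.objective call quoted above can be sketched roughly like this (a simplified stand-in for illustration, not the repo's actual objective class):

```python
import torch

def masked_l1_flow_loss(flow_est, flow_gt, mask):
    """Simplified masked L1 flow objective.

    flow_est, flow_gt: (B, 2, H, W) flow fields.
    mask: (B, H, W) boolean validity map for the ground-truth flow.
    """
    per_pixel = (flow_est - flow_gt).abs().sum(dim=1)  # L1 over (u, v)
    valid = per_pixel[mask]
    # Average only over valid pixels; return a zero that keeps the graph
    # connected if no pixel is valid.
    return valid.mean() if valid.numel() > 0 else per_pixel.sum() * 0.0
```

The mask restricts supervision to pixels where the synthetic ground-truth flow is valid; the direction question above is about which estimated flow this distance should be taken against.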
Traceback (most recent call last):
  File "test_models.py", line 8, in <module>
    from model_selection import select_model
  File "/DenseMatching/model_selection.py", line 1, in <module>
    from models.GLUNet.GLU_Net import GLUNetModel
  File "/DenseMatching/models/GLUNet/GLU_Net.py", line 7, in <module>
    from models.base_matching_net import BaseGLUMultiScaleMatchingNet, set_glunet_parameters
  File "/DenseMatching/models/base_matching_net.py", line 12, in <module>
    from third_party.GOCor.GOCor.global_gocor_modules import GlobalGOCorWithFlexibleContextAwareInitializer
ModuleNotFoundError: No module named 'third_party'
I get this error when testing on my own pairs. Could you tell me why? Thanks!
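A common cause of this error is running the script from outside the repository root, so the top-level third_party package is not importable. A hedged sketch of a workaround (the path below is an assumption; adjust it to your checkout location, or simply run the script from the repo root):

```python
import os
import sys

# 'third_party' is a top-level folder in the DenseMatching checkout, so the
# repo root must be on sys.path before importing model_selection.
repo_root = os.path.abspath("DenseMatching")  # hypothetical path to your checkout
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)
```

Also make sure the repository was cloned with its submodules (e.g. git clone --recursive), since third_party content may otherwise be an empty directory.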
I think there is a problem with utils_flow/pixel_wise_mapping.py:warp [link] if x, the image to warp, im2, and flo, the dense flow, are different sizes.
flo has shape [H_1, W_1, xy] with x-range (-W_2, W_2) and y-range (-H_2, H_2) and, as noted in the documentation, maps im2 (aka x) with shape [H_2, W_2] back to im1 with shape [H_1, W_1] (via torch.nn.functional.grid_sample). However, here flo is rescaled to [-1, 1] by H_1, W_1, not H_2, W_2 as it should be.
Perhaps this limitation is suggested by the documentation, since it says the shapes of x and flo are both (H, W), but that should be made explicit and even enforced with an assert, I think.
This affects several other functions: utils_flow/pixel_wise_mapping.py:warp_with_mapping, and unormalise_and_convert_mapping_to_flow, unormalise_flow_or_mapping, unnormalize, & normalize in utils_flow/flow_and_mapping_operations.py. utils_flow/pixel_wise_mapping.py:remap_using_flow_fields works as expected because it does not rescale the flow.
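A hedged sketch of the fix being suggested (warp_fixed is an illustrative stand-in, not the repo's function, and uses a channel-first (B, 2, H, W) flow layout): normalise the sampling coordinates by the size of im2, the image grid_sample actually samples from, rather than by the size of the flow grid.

```python
import torch
import torch.nn.functional as F

def warp_fixed(im2, flo):
    """Backward-warp im2 onto the flow's grid.

    im2: (B, C, H_2, W_2) image to sample from.
    flo: (B, 2, H_1, W_1) flow on the output grid, pointing into im2.
    Returns a (B, C, H_1, W_1) warped image.
    """
    b, _, h1, w1 = flo.shape
    _, _, h2, w2 = im2.shape
    # Pixel grid of the output (flow) resolution
    xx = torch.arange(w1, dtype=flo.dtype).view(1, 1, 1, w1).expand(b, 1, h1, w1)
    yy = torch.arange(h1, dtype=flo.dtype).view(1, 1, h1, 1).expand(b, 1, h1, w1)
    coords = torch.cat((xx, yy), dim=1) + flo  # absolute coords in im2
    # Normalise to [-1, 1] by the size of im2 (H_2, W_2), not the flow grid
    x_norm = 2.0 * coords[:, 0] / max(w2 - 1, 1) - 1.0
    y_norm = 2.0 * coords[:, 1] / max(h2 - 1, 1) - 1.0
    grid = torch.stack((x_norm, y_norm), dim=3)  # (B, H_1, W_1, 2)
    return F.grid_sample(im2, grid, align_corners=True)
```

When im2 and flo have the same spatial size the two normalisations coincide, which is presumably why the bug only shows up for mismatched sizes.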
Hi,
Thanks for your excellent work. I would like to know if all the experiments in the paper are based on the fixed settings.
Thanks.
Hello, thanks for your excellent work and code. I have an issue with line 442 in PDCNet.py. I think the code should be "c_t=c23, c_s=c13", which means the source is image1 and the target is image2. (Maybe I misunderstand your code; please correct me if so. (@_@;))