
densematching's Introduction

Hello there 👋 I am Prune Truong

🌱 Currently: Research Scientist at Google, in the Semantic Perception team of Federico Tombari.

🌱 Previously:

🤔 Most of my PhD work focuses on correspondence estimation, 3D reconstruction, pose estimation and novel-view rendering.

📫 Reach me for collaborations or just general questions here.

You can find me on Twitter or LinkedIn - for my research and latest updates, check out my Google Scholar and Homepage!

densematching's People

Contributors

prunetruong, scott-vsi


densematching's Issues

Details of MegaDepth for WarpC training.

Thanks for your great work.
I cannot find

'train_split': 'train_scenes_MegaDepth.txt',
'train_debug_split': 'train_debug_scenes_MegaDepth.txt',
'val_split': 'validation_scenes_MegaDepth.txt',

these files. Could you share more details on how to preprocess the MegaDepth dataset?

Code release about Warp Consistency for Unsupervised Learning of Dense Correspondences

Hi !
Thanks for sharing such wonderful work since GLAMpoints. All the papers and code have greatly helped my research, for which I am very grateful. I'm looking forward to seeing the next paper you release in the near future (perhaps for ECCV).

I have a question about the newly accepted ICCV'21 paper. Currently, the code does not seem to be released yet. If it isn't too much trouble, could I please ask when the code will be available?

Thanks

Some questions about the supervise loss

Hi, I have some questions about the supervised loss, as follows:
ss_loss_o, ss_stats_o = self.objective(estimated_flow_target_prime_to_target_directly, mini_batch['flow_map'], mask=mini_batch['mask'],
mini_batch['flow_map'] is the ground-truth flow you generate in online_triplet_creation.py (flow_gt = self.synthetic_flow_generator(mini_batch=mini_batch, training=training, net=net)), and the target image prime is produced by warping with flow_gt. So shouldn't mini_batch['flow_map'] be flow_target_to_target_prime_directly? Why do you calculate the L1 distance between estimated_flow_target_prime_to_target_directly and mini_batch['flow_map'], rather than between estimated_flow_target_to_target_prime_directly and mini_batch['flow_map']?
Thanks! Looking forward to your reply!
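A toy 1-D backward-warp sketch (my own illustration, not the repo's code) may clarify the flow direction convention in question: when target_prime is synthesized by backward-warping target with flow_gt (as grid_sample-style warping does), flow_gt tells each pixel of target_prime where to sample in target, i.e. it is the flow from target_prime to target.

```python
import numpy as np

# Minimal 1-D backward warp: out[i] = img[i + flow[i]] (clamped).
# This mirrors how grid_sample-style warping consumes a flow field.
def backward_warp(img, flow):
    idx = np.clip(np.arange(len(img)) + flow, 0, len(img) - 1)
    return img[idx.astype(int)]

target = np.array([10.0, 20.0, 30.0, 40.0])
flow_gt = np.array([1, 1, 1, 0])               # toy ground-truth flow
target_prime = backward_warp(target, flow_gt)  # -> [20., 30., 40., 40.]

# Each entry flow_gt[i] points from pixel i of target_prime into target,
# so flow_gt is the flow from target_prime to target.
```

Under this convention, comparing the ground-truth flow against the estimated target_prime-to-target flow is consistent.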

How to solve the problem of poor matching results on a self-built dataset

Hi, I'm having some problems with image matching using your project. I used my own dataset, but the matching was poor, especially with a small number of iterations, and the resulting matched images were distorted. I followed the documentation, but the problem persists. Could you give me some suggestions to improve the matching quality, especially on how to avoid image distortion?

ModuleNotFoundError: No module named 'third_party'

Traceback (most recent call last):
  File "test_models.py", line 8, in <module>
    from model_selection import select_model
  File "/DenseMatching/model_selection.py", line 1, in <module>
    from models.GLUNet.GLU_Net import GLUNetModel
  File "/DenseMatching/models/GLUNet/GLU_Net.py", line 7, in <module>
    from models.base_matching_net import BaseGLUMultiScaleMatchingNet, set_glunet_parameters
  File "/DenseMatching/models/base_matching_net.py", line 12, in <module>
    from third_party.GOCor.GOCor.global_gocor_modules import GlobalGOCorWithFlexibleContextAwareInitializer
ModuleNotFoundError: No module named 'third_party'

I get this error when testing on my own pairs; could you tell me why? Thanks!
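For reference, this error usually means Python was not launched from the repository root, so the top-level third_party package cannot be found on the module search path. A minimal sketch of a workaround, assuming the current working directory is your DenseMatching checkout (the path is yours to adjust):

```python
import os
import sys

# Put the DenseMatching checkout (which contains the top-level
# 'third_party' folder) on sys.path before importing the models.
repo_root = os.path.abspath('.')  # adjust to your checkout path
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)
```

Alternatively, setting PYTHONPATH to the checkout directory before running test_models.py has the same effect.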

Is the Warp Consistency Loss suitable for small-motion datasets?

Hi, Thanks for this wonderful project!
I want to train an optical flow network on my own dataset, but the view point changes in my dataset is small , Can the WarpC loss get a good result in such dataset? I found the dataset used in this project have a large view point changes.

About MegaDepth

Hi, how much disk storage is required for the MegaDepth dataset? And how much memory is required for training the WarpC model on MegaDepth?

result = unpickler.load(), ModuleNotFoundError: No module named 'admin.local'

When running demo_single_pair.py, the following error is reported:

Traceback (most recent call last):
  File "D:/PyProjects/DenseMatching-main/demos/demo_single_pair.py", line 103, in
    network, estimate_uncertainty = select_model(
  File "D:\PyProjects\DenseMatching-main\model_selection.py", line 234, in select_model
    network = load_network(network, checkpoint_path=checkpoint_fname)
  File "D:\PyProjects\DenseMatching-main\model_selection.py", line 28, in load_network
    checkpoint_dict = torch.load(checkpoint_path)
  File "E:\ProgramData\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py", line 592, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "E:\ProgramData\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py", line 851, in _load
    result = unpickler.load()
ModuleNotFoundError: No module named 'admin.local'

network = load_network(network, checkpoint_path=checkpoint_fname)
checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')

I set the pre-trained model path as path_to_pre_trained_models = '../pre_trained_models/PDCNet_plus_megadepth.pth.tar'.
How can I fix it?
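In case it helps, one generic unpickling workaround (my assumption, not the repo's documented fix): the checkpoint pickle references a module admin.local that exists only in the training environment, so registering empty stub modules before torch.load lets the unpickler resolve the name.

```python
import sys
import types

# The pickled checkpoint references 'admin.local', a module from the
# training environment that does not exist on this machine. Register
# empty stubs so unpickling can resolve the name (generic workaround,
# assumed; run from the repo root if an 'admin' package exists there).
admin = types.ModuleType('admin')
admin_local = types.ModuleType('admin.local')
admin.local = admin_local
sys.modules.setdefault('admin', admin)
sys.modules.setdefault('admin.local', admin_local)

# Then load on CPU as in the snippet above:
# checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
```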

Line 442 in PDCNet.py.

Hello, thanks for your excellent work and code. I have a question about line 442 in PDCNet.py. I think the code should be "c_t=c23, c_s=c13", which means the source is image1 and the target is image2. (Maybe I have misunderstood your code; please correct me if so. (@_@;))

A simple correction for torch version check

Hi Prune,

Thanks for your great work!

I have noticed that you have some checks for the grid_sample function's behavior change from torch version 1.3 onwards.

These checks look like:
if float(torch.__version__[:3]) >= 1.3:

However, if the torch version is 1.10.x, this check fails (it compares 1.1 to 1.3 and returns False) and affects the flow values.

I compare the versions with the following lines instead, and they work properly.

from packaging import version
if version.parse(torch.__version__) >= version.parse("1.3"):
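To make the failure mode concrete, here is a dependency-free sketch; the two-component parse helper below is only a stand-in for packaging.version, for illustration:

```python
# Stand-in for packaging.version (illustration only): compare the
# first two numeric components of a version string as a tuple.
def parse(v):
    return tuple(int(p) for p in v.split('+')[0].split('.')[:2])

# "1.10.2"[:3] is "1.1", so the old float-based check wrongly treats
# torch 1.10 as older than 1.3.
old_check = float("1.10.2"[:3]) >= 1.3        # False
new_check = parse("1.10.2") >= parse("1.3")   # True: (1, 10) >= (1, 3)
```

Tuples compare component-wise, so (1, 10) correctly sorts after (1, 3), whereas the truncated float comparison does not.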

Loss calculation

Hi,
Thanks for this wonderful project. I would like to know whether the mask is involved in the loss calculation during training.

The maximum of the confidence is always 0.573

Hi, I tested PDC-Net and visualized the confidence map for several images. I find that the maximum of the confidence is always 0.573, even when the query and the reference are the same image. Does the confidence map represent the uncertainty of each pixel, mapped to [0, 1]?
(attached: Warped_query_image_PDCNet_megadepth_test_piazza_san_marco and confidence_map)

Questions about testing my own images

Hi! I'm trying to use PDC-Net to warp my own image pairs (chair2 and chair5, attached below).
However, an error is reported (see the attached screenshot from 2021-09-15).
I don't know how to solve it. I would really appreciate it if you could answer me. Thanks!

Some questions about the warped image

  1. Is it possible to get only the warped image, rather than several images stitched together?
  2. Does the information in the warped image, say its resolution, come completely from the query image? I mean, the reference image doesn't contribute any additional information to the final result.

problem with warp if the `x` and `flo` are different sizes

I think there is a problem with utils_flow/pixel_wise_mapping.py:warp if x, the image to warp (im2), and flo, the dense flow, are different sizes.

flo has shape [H_1, W_1, xy] with x-range (-W_2, W_2) and y-range (-H_2, H_2) and, as noted in the documentation, maps im2 (aka x), with shape [H_2, W_2], back to im1, with shape [H_1, W_1] (via torch.nn.functional.grid_sample). However, here flo is rescaled to [-1, 1] by H_1, W_1, not by H_2, W_2 as it should be.
Perhaps this limitation is implied by the documentation, since it says the shapes of x and flo are both (H, W), but I think that should be made explicit and even enforced with an assert.

This affects several other functions: utils_flow/pixel_wise_mapping.py:warp_with_mapping, and unormalise_and_convert_mapping_to_flow, unormalise_flow_or_mapping, unnormalize, & normalize in utils_flow/flow_and_mapping_operations.py. utils_flow/pixel_wise_mapping.py:remap_using_flow_fields works as expected because it does not rescale the flow.
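To illustrate the point being made, here is a minimal numpy sketch (the function name and shapes are my own, not the repo's API): coordinates sampling into an image of size H_2 x W_2 must be normalized to [-1, 1] by W_2 and H_2, the size of the sampled image, regardless of the grid size H_1 x W_1.

```python
import numpy as np

# mapping[..., 0] holds x coordinates into the sampled image (width w2),
# mapping[..., 1] holds y coordinates (height h2). grid_sample-style
# sampling expects these normalized to [-1, 1] by the *sampled* image's
# size, not by the grid's size.
def normalize_mapping(mapping, h2, w2):
    out = mapping.astype(float).copy()
    out[..., 0] = 2.0 * out[..., 0] / max(w2 - 1, 1) - 1.0
    out[..., 1] = 2.0 * out[..., 1] / max(h2 - 1, 1) - 1.0
    return out

m = np.zeros((2, 3, 2))  # H_1=2, W_1=3 grid, different from the image size
m[..., 0] = 7            # x coordinate: right edge of an 8-pixel-wide image
m[..., 1] = 0            # y coordinate: top edge
n = normalize_mapping(m, h2=4, w2=8)  # n[..., 0] == 1.0, n[..., 1] == -1.0
```

Normalizing by the grid size (3 and 2 here) instead would push the coordinates far outside [-1, 1] and sample the wrong pixels.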

Used kernel for WarpCRANSACflow

Hey! Great code and a well-organized GitHub! Thanks for your work ;)
I'm using your weights to align images with RANSAC-Flow. The only problem I have is that I'm getting some deformation in the images. I looked around your code and your WarpC paper but didn't find any detail about the size of the kernel used. In RANSAC-Flow it was 7. Did you use the same size?

Thanks!

About Scannet Training

Hi, I recently trained PDC-Net on the ScanNet dataset but got worse results. The flow estimation is reasonable, while the probabilistic map is predicted wrongly. Is there something different between ScanNet and MegaDepth?

Training code release date

Thanks so much for this great work! Do you have an estimated date for the training code to be released?

Thanks!

About the download link

Thanks,

MegaDepth
We use the reconstructions provided in the D2-Net repo. You can download the undistorted reconstructions and aggregated scene information folder directly here - Google Drive.

However, this link fails to open.
Could you provide it again?

To run on newer cupy

For those trying to run on later versions of cupy: just replace cupy.util.memoize with cupy.memoize.
