
oanet's People

Contributors

cortwave, sundw2014, zjhthu


oanet's Issues

Training details

Hi,

I was just wondering if you could please clarify some inconsistencies between the paper and the provided implementation. Specifically, in the paper you state that you use a learning rate of 10^-4 and that the weight alpha is 0.1 after 20k iterations. However, in the provided config the weight of the essential loss is 0.5 and the learning rate is 10^-3.

Thanks
Zan

Where is yfcc_train.txt?

First of all, thanks for your great work. I have downloaded the YFCC100M dataset, but when I try to generate the YFCC training data, I cannot find yfcc_train.txt anywhere in it. The dump script reads:

```python
# dump yfcc training seqs
with open('yfcc_train.txt', 'r') as ofp:
    train_seqs = ofp.read().split('\n')
if len(train_seqs[-1]) == 0:
    del train_seqs[-1]
```
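
If it helps, here is one possible workaround, assuming yfcc_train.txt is nothing more than a newline-separated list of training sequence names (which is what the reading code above implies); whether every extracted directory really belongs to the training split is an assumption, and the raw_data path is taken from error logs elsewhere in this thread:

```python
import os

# Hedged workaround: regenerate yfcc_train.txt as a newline-separated list
# of sequence directory names. Adjust raw_root to your own layout.
raw_root = '../raw_data/yfcc100m'
seqs = sorted(d for d in os.listdir(raw_root)
              if os.path.isdir(os.path.join(raw_root, d)))
with open('yfcc_train.txt', 'w') as ofp:
    ofp.write('\n'.join(seqs))
```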

Number of SIFT keypoints and descriptors for the fundamental model

I tried to re-train the fundamental model using 8000 SIFT keypoints. However, with the default settings in extract_feature.py, only 4000-6000 keypoints can be extracted, and the resulting model performed badly compared with the provided fundamental estimation model. Do you have any suggestions on settings that yield more keypoints? Thanks~
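
For what it's worth, a plausible way to push the keypoint count toward 8000 with OpenCV (an assumption, not necessarily what extract_feature.py does) is to raise nfeatures and lower the contrast threshold so weaker keypoints are kept:

```python
import cv2

# Hedged sketch, not the repo's exact extract_feature.py settings:
# ask SIFT for more features and lower the contrast threshold so that
# weaker keypoints survive the filtering step.
sift = cv2.SIFT_create(nfeatures=8000, contrastThreshold=1e-5)
img = cv2.imread('example.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical image
kp, desc = sift.detectAndCompute(img, None)
print(len(kp))  # on textured images this should get close to 8000
```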

How to load the contextdesc model?

When I use the contextdesc-yfcc model in the demo script, I get:

```
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for OANet:
	size mismatch for weights_init.conv1.weight: copying a param with shape torch.Size([128, 5, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 4, 1, 1]).
	size mismatch for weights_iter.0.conv1.weight: copying a param with shape torch.Size([128, 7, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 6, 1, 1]).
```

How do I use the contextdesc model? Thanks!
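
The one-channel gaps in the error (5 vs. 4 and 7 vs. 6 input channels) suggest the checkpoint was trained with an extra side-information channel, e.g. the ratio/mutual side channels discussed in the sift-side-8k question below, so the model must be built with the matching config before load_state_dict can succeed. A quick, hedged way to diagnose the mismatch (the filename here is a placeholder) is to print the checkpoint's shapes:

```python
import torch

# Diagnostic sketch: 'model_best.pth' stands in for the downloaded
# contextdesc-yfcc checkpoint; compare these shapes with your model's.
ckpt = torch.load('model_best.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)   # some checkpoints nest the weights
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```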

Where is the implementation of the Spatial Correlation Layer?

Excuse me, I wonder where the implementation of the Spatial Correlation Layer is.

In your paper, you add the Spatial Correlation Layer to the original PointCN structure to:

explicitly model relation between different nodes and capture the complex global context

The diagram in your paper shows it as two transpose operations, but I cannot find them in the code.
Could you please tell me where the implementation is?
Thanks.
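
For anyone else hunting for it: the transposes appear to live inside the order-aware filtering block rather than in a module literally named "Spatial Correlation". Below is a minimal sketch, under my own reading of the paper and not the repo's exact layer: transposing the (batch, channels, clusters) map lets a shared Conv1d mix information across clusters.

```python
import torch
import torch.nn as nn

# Minimal sketch of the spatial correlation idea from the paper (assumed
# structure, not the repo's exact code): transpose so clusters act as
# channels, mix them with a shared Conv1d, transpose back with a residual.
class SpatialCorrelation(nn.Module):
    def __init__(self, clusters):
        super().__init__()
        self.mix = nn.Sequential(
            nn.BatchNorm1d(clusters),
            nn.ReLU(inplace=True),
            nn.Conv1d(clusters, clusters, kernel_size=1),
        )

    def forward(self, x):                 # x: (B, C, M) features of M clusters
        y = self.mix(x.transpose(1, 2))   # (B, M, C): clusters become channels
        return x + y.transpose(1, 2)      # residual, back to (B, C, M)

feat = torch.randn(2, 128, 500)
print(SpatialCorrelation(500)(feat).shape)  # torch.Size([2, 128, 500])
```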

License and Citations

Hi Zixin,

I just noticed that there is a bit of an issue with the license notice and citation acknowledgement for the CVPR 2018 paper. Could you please add them? EPFL and UVic hold the copyright to that code, and it is released for research purposes only, so you are actually violating it by releasing your code as MIT. Can you please make that part explicit?

Thanks!
Kwang

yfcc-sift-2000.train.hdf5 is too large

The first time I extracted features and ran yfcc.py, the file was only 5.1 GB, but the second time I extracted features and ran yfcc.py it was 32.7 GB. Which one is right?

error while loading calibration file

When I run yfcc.py, there is an error. How can I fix it?

```
../raw_data/yfcc100m//buckingham_palace/test
dump dir ../data_dump//buckingham_palace/sift-2000/test
Error while loading ../raw_data/yfcc100m//buckingham_palace/test/calibration/calibration_000001.h5
Traceback (most recent call last):
  File "/media/data/wzy/OANet/dump_match/utils.py", line 98, in loadh5
    raise e
  File "/media/data/wzy/OANet/dump_match/utils.py", line 95, in loadh5
    dict_from_file = readh5(h5file)
  File "/media/data/wzy/OANet/dump_match/utils.py", line 111, in readh5
    dict_from_file[_key] = h5node[_key].value
AttributeError: 'Dataset' object has no attribute 'value'

Process finished with exit code 1
```
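
This particular AttributeError is an h5py versioning issue rather than a data problem: Dataset.value was deprecated and then removed in h5py 3.0. Either install an older h5py (below 3.0) or patch readh5 in dump_match/utils.py along these lines:

```python
# in dump_match/utils.py, readh5: index the dataset instead of using the
# .value accessor that h5py >= 3.0 removed
dict_from_file[_key] = h5node[_key][()]   # was: h5node[_key].value
```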

If I want to train on my own dataset

Thanks in advance. I have read your preprocessing code for the YFCC dataset, but I do not fully understand the process. If I now want to do feature matching on my own dataset, what steps should I follow?
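
As a rough orientation (a generic sketch, not the repo's exact dump format): the pipeline needs, per image pair, keypoints, descriptors, and nearest-neighbour matches, which you can produce for your own images with OpenCV before packing them into whatever format the dump scripts expect. The filenames below are hypothetical.

```python
import cv2
import numpy as np

# Generic first step for custom data: extract SIFT keypoints/descriptors
# for an image pair and form nearest-neighbour matches; the resulting
# correspondences are the kind of input a learned filter like OANet consumes.
sift = cv2.SIFT_create(nfeatures=2000)
img1 = cv2.imread('a.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical images
img2 = cv2.imread('b.jpg', cv2.IMREAD_GRAYSCALE)
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).match(desc1, desc2)
corr = np.array([[*kp1[m.queryIdx].pt, *kp2[m.trainIdx].pt] for m in matches])
print(corr.shape)  # (num_matches, 4): x1, y1, x2, y2
```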

Question about “epoch”

Hi,
thanks for sharing the code.

Why do you use iterations instead of epochs when training?

Look forward to your reply.

Regards

How to calculate mAP via the results?

Excuse me, I have run the code and got the results saved as shown in the screenshot below.

[screenshot of the saved AUC results]

I was wondering how to calculate the mAP reported in the paper from these AUC results.

Thank you,

The accuracy on YFCC with GL3D model

Hi, I have a question about the GL3D model. When I test it on the YFCC test set, the results are very bad:
test result [0.09993750000000001, 0.029445680575755823, 0.5177923955358564, 0.9643927037450776, 0.446788042617176, 0.6026962917102501, 0.46986473430161074]

Is there something wrong with my test settings? And what hyperparameters do you use for training the GL3D model? I cannot find any related code in the repo.

Thank you very much.

SUN3D test data

Hello, I can't find the test scenes in the raw SUN3D data, and I need their depth data. Could you tell me how you generated these test data?

About side

Thank you for a great project. As I dig deeper into your code, I see that the run function of NNMatcher (Learnedmatcher.py) returns corr, sides, and corr_Idx. corr and corr_Idx are the corresponding match points and the descriptor indices respectively, but what does sides mean? The distribution of inlier and outlier points? I hope you can help answer; thank you.

Error 400 when downloading the data

Hi,
I have successfully downloaded the model, but I get an ERROR 400: Bad Request when I try to download the data with wget https://research.altizure.com/data/oanet_data/data_dump.tar.gz

Cheers,
François Darmon

Application on 3D reconstructions

Hello @zjhthu ,

I really appreciate your great work, but I would like to know how the 3D reconstruction results are obtained (using the output of OANet).

Thanks in advance!

About the calculation of the distance matrix in class "NNMatcher"

I understand that the variables d1 and d2 are the sums of squares of each descriptor, but I don't know why the distance matrix in NNMatcher.run is calculated like this:

```python
d1, d2 = (desc1**2).sum(1), (desc2**2).sum(1)
distmat = (d1.unsqueeze(1) + d2.unsqueeze(0) - 2*torch.matmul(desc1, desc2.transpose(0,1))).sqrt()
```

The part -2*torch.matmul(desc1, desc2.transpose(0,1)) especially confuses me.

Does anybody know how it works?
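
That term is the cross term of the squared-distance expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a·b, evaluated for all descriptor pairs at once through a single matrix product. A quick sanity check (the clamp only guards against tiny negative values from floating-point round-off):

```python
import torch

desc1, desc2 = torch.randn(100, 128), torch.randn(80, 128)
d1, d2 = (desc1**2).sum(1), (desc2**2).sum(1)
distmat = (d1.unsqueeze(1) + d2.unsqueeze(0)
           - 2 * torch.matmul(desc1, desc2.transpose(0, 1))).clamp(min=0).sqrt()
# the same pairwise Euclidean distances computed directly
print(torch.allclose(distmat, torch.cdist(desc1, desc2), atol=1e-3))  # True
```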

YFCC100M depth Data

hi,
thanks for your great work. Recently I wanted to use the YFCC100M dataset, but I can't find the depth data, even though depth.txt exists. Can you tell me how to get all the data?

best.

Question about “best_model”

Hi,
thanks for sharing the code.

I found that on the YFCC dataset, the best model for fundamental matrix estimation is in the sift-side-8k directory. Does this mean that both use_ratio and use_mutual are used?

Look forward to your reply.

Regards

A question about the computation of loss

Hi Jiahui, I'm trying to reproduce the results on YFCC.
I have a question about the computation of loss.
I find that the essential loss is used after 20k steps, and only essential losses less than 0.1 are used in the backward pass (https://github.com/zjhthu/OANet/blob/master/core/loss.py#L49 and https://github.com/zjhthu/OANet/blob/master/core/loss.py#L84). I am wondering what the motivation behind this implementation is, and what would happen if we used all essential losses all the time.
Thank you for the excellent work.
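
For context, here is a minimal sketch of that masking as I read it (an interpretation, not a quote of the repo's loss.py): geometric losses blow up on hard or degenerate pairs, so values above the threshold are excluded from the mean to keep them from dominating the gradient.

```python
import torch

# Minimal sketch of the thresholded essential loss, assuming the intent is
# to drop hard/degenerate pairs whose loss exceeds the cutoff.
def masked_essential_loss(e_loss, thr=0.1):
    mask = (e_loss < thr).float()
    return (e_loss * mask).sum() / mask.sum().clamp(min=1.0)

print(masked_essential_loss(torch.tensor([0.02, 0.05, 3.0])))  # tensor(0.0350)
```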

A question about the evaluation metric

Hi,
I have a minor question about the evaluation metrics. In the paper, the standard metric is mAP under 5 degrees, but I find that the computation of mAP uses an interval of 5 degrees (`ths = np.arange(7) * 5`), so the actual metric is the precision of the pose under 5 degrees. I don't know whether I have missed some part of the code.

Thank you!
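
For readers tripped up by the same line, here is what those thresholds do with some hypothetical pose errors: each entry is a plain precision (the fraction of pairs whose angular pose error falls under the threshold), and the 5-degree entry is the number typically reported.

```python
import numpy as np

errs = np.array([1.2, 3.4, 7.8, 12.0, 2.2])    # hypothetical pose errors (deg)
ths = np.arange(7) * 5                         # [0, 5, 10, 15, 20, 25, 30]
prec = [np.mean(errs < th) for th in ths[1:]]  # precision at each threshold
print(prec[0])  # 0.6 -> the "under 5 degrees" number reported as mAP
```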

skip because nan

Will gradient explosion affect the training results? Did you also encounter it during training?
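
For what it's worth, a common pattern (an assumption about intent, not this repo's exact loop) is to skip the optimizer step on NaN losses and to clip gradients so a single bad batch cannot poison the weights:

```python
import torch
import torch.nn as nn

# Minimal sketch: skip NaN batches (as the "skip because nan" log suggests)
# and clip gradients as a guard against explosion.
model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 4), torch.randn(8, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
if torch.isnan(loss):
    pass                                   # skip the step for this batch
else:
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```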

about mAP

Hi, thanks for your project. I want to compute the mAP on my own data. What should I do? Thanks!
