
tfeat's People

Contributors

bitsun, ducha-aiki, edgarriba, vbalnt


tfeat's Issues

How to evaluate on Liberty using Notredame

The test accuracy changes from run to run. Did you report the average precision on the test data?

I mean, if we train for 10 epochs we get one precision, and if we train for 20 epochs we get another result. How did you handle this?
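One generic way to handle this run-to-run variance (not from the repo, just a sketch) is to repeat the evaluation and report the mean and standard deviation of the test precision:

```python
def mean_std(values):
    # Mean and population standard deviation of repeated evaluation runs.
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

# e.g. precisions from five evaluation runs (made-up numbers)
m, s = mean_std([0.81, 0.83, 0.80, 0.82, 0.84])
```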

two questions about ratio loss and training patch extraction

Hi, thank you for the great work first. May I ask two questions:

  1. Basically the ratio loss is unsuitable for keypoint matching (it doesn't converge at all) according to your paper, so the best choice is the ranking loss with anchor swap. Am I correct?
  2. How are the training patches extracted from Lowe's DoG keypoints? Is it just a fixed-size window crop, or is it based on the keypoint's scale (and possibly orientation as well) followed by a normalization step?

Thanks in advance.
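For reference, the ranking loss with anchor swap from the paper can be sketched in plain Python; this is an illustrative sketch, not the repo's code, and the margin value is arbitrary:

```python
import math

def l2(x, y):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def triplet_margin_swap(anchor, positive, negative, margin=1.0):
    # Triplet ranking loss with anchor swap: the negative distance is
    # the harder (smaller) of d(anchor, negative) and d(positive, negative).
    d_pos = l2(anchor, positive)
    d_neg = min(l2(anchor, negative), l2(positive, negative))
    return max(0.0, margin + d_pos - d_neg)
```

With an easy negative the loss is zero; as the negative approaches either the anchor or the positive, the swap term kicks in and the loss grows.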

Download files

Could you send the link to the input_video.webm and object_img.png files?

Because the script fails with this message:
File "tfeat_demo.py", line 220
print "Not enough matches are found with TFEAT - %d/%d" % (len(good1), MIN_MATCH_COUNT)
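For what it's worth, that line uses Python 2 print-statement syntax, which is a SyntaxError under Python 3. A sketch of the fix, with `good1` and `MIN_MATCH_COUNT` as stand-ins for the script's actual variables:

```python
good1 = []            # stand-in for the script's list of good matches
MIN_MATCH_COUNT = 10  # stand-in for the script's match threshold

# Python 3 compatible: print is a function call, not a statement.
msg = "Not enough matches are found with TFEAT - %d/%d" % (len(good1), MIN_MATCH_COUNT)
print(msg)
```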

Exact learning rate schedule

Paper says:

For the optimization the Stochastic Gradient Descend [3] is used, and the training is done in batches of 128 items, with a learning rate of 0.1 which is temporally annealed, momentum of 0.9 and weight decay of 10^-6. We also reduce the learning rate every epoch.

Could you please clarify exactly which factor is used to reduce the learning rate? And which epoch's snapshot was used to produce the results in the paper?
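The paper does not state the factor; one simple per-epoch annealing scheme consistent with "reduced every epoch" is linear decay to zero. Purely illustrative, not necessarily what the authors used:

```python
def annealed_lr(base_lr, epoch, total_epochs):
    # Linear annealing: the learning rate shrinks every epoch and
    # reaches zero after total_epochs.
    return base_lr * (1.0 - epoch / float(total_epochs))

for epoch in range(3):
    print(epoch, annealed_lr(0.1, epoch, 10))
```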

Unclear steps for running tfeat_demo.py

I have been trying to run tfeat_demo.py for 2 days with no luck.
My configuration is Ubuntu 18.04, CUDA 9.0, cuDNN 7.0.5, Python 2.7, Torch 7.
Could you please list all dependencies with their versions and the steps to run the demo?
There is also a syntax error when running tfeat_demo.py, and get_nets.sh does not work.
My goal is to run the demo in OpenCV as shown in the YouTube demo.

Overfitted

When will the "UBC all" trained params be ready? The benefit of descriptors like SURF and SIFT is that they are independent of the dataset, whereas your descriptors seem somewhat overfitted to the individual UBC datasets. I am getting better results with SIFT/SURF descriptors on most of my images, so TFeat is not production ready for me. Currently the Yosemite pretrained parameters seem best; I would like to try "UBC all", and meanwhile I will stick with SIFT descriptors.

Hint: to win the battle against SIFT, I recommend training on RGB patches instead of grayscale, though it may be hard to find such a dataset. It requires changing the input channel count from 1 to 3 in the first nn.Conv2d layer (and in the nn.InstanceNorm2d before it):

        self.features = nn.Sequential(
            nn.InstanceNorm2d(3, affine=False),  # 3 channels to match RGB input
            nn.Conv2d(3, 32, kernel_size=7),
            nn.Tanh(),
        ...
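As a sanity check on the channel change: the spatial geometry of the first layer is unchanged. Assuming 32x32 input patches (the size TFeat uses), a 7x7 convolution yields 26x26 feature maps whether the input has 1 or 3 channels; only the kernel's depth changes:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    # Spatial output size of a Conv2d layer (no dilation).
    return (size + 2 * padding - kernel) // stride + 1

print(conv2d_out(32, 7))  # 26
```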

Any training tips?

These days I have tried to train the network on 1281283 images with SGD, generating 1.28M triplets.
However, it is very difficult to get it to converge.
Can you give me some training tips?

Results of pretrained models

Hello,

I evaluated the pretrained models that you provide in the repo, but the results are not the same as in the paper. Are the pretrained models the same ones used in the paper?

The strange part is that I actually get slightly lower results (by 1-2%).

Thank you

Test tfeat descriptor using SIFT keypoint

@vbalnt
Thank you for providing such wonderful work. You provided a script to show correspondences, and it shows that the TFeat descriptor is clearly better than BRISK. However, when I use SIFT to detect the keypoints, I found the performance is much worse.

Could you give me some advice on how to solve this problem?

Evaluation of dataset

Dear vbalnt,

Thank you very much for open sourcing the code, it is very easy to read.
I would like to point out one thing: in the eval section, the offset needs to be updated (offset += FLAGS.batch_size).

Thank you again,
Best Regards,
ManyIds
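The missing offset update can be illustrated with a minimal batching loop (hypothetical names, not the repo's code): without advancing the offset each iteration, every batch re-reads the same first batch_size items.

```python
def iter_batches(n_items, batch_size):
    # Yield (start, end) index pairs covering the dataset once.
    offset = 0
    while offset < n_items:
        yield offset, min(offset + batch_size, n_items)
        offset += batch_size  # the update the issue says is missing
```

For example, 10 items in batches of 4 produce the ranges (0, 4), (4, 8), (8, 10).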

Details in tfeat_demo.py

First of all, thanks for open-sourcing your code. I have two questions/issues regarding tfeat_demo.py:

  1. The extracted patches should be normalized to the range the network was trained on. In training, I believe one uses the [0,1] range, subtracts the mean (~0.48), and divides by the stddev (~0.18). In testing, OpenCV works with the [0,255] range and line 75 just subtracts the mean of each patch.

  2. ORB may be involuntarily disadvantaged by improper matching: cv2.NORM_HAMMING is recommended for ORB's binary descriptors, whereas cv2.NORM_L2 is the right choice for tfeat's float descriptors.
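Point 1 can be sketched as follows; the mean/std values are the approximate ones quoted above, not exact repo constants:

```python
def normalize_patch(patch_u8, mean=0.48, std=0.18):
    # Map uint8 pixel values into the distribution seen in training:
    # scale to [0, 1], subtract the dataset mean, divide by the stddev.
    return [(p / 255.0 - mean) / std for p in patch_u8]

normalized = normalize_patch([0, 128, 255])
```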
