
Comments (4)

Rookfighter commented on June 12, 2024

Hi Smithangshu and thanks for using cv-eigen!

Just as a disclaimer: if I recall correctly I had some problems implementing the Horn-Schunck optical flow method, so the algorithm might still be incorrect. If you find any issues with Horn-Schunck please file an issue and I will fix it!

Now about your actual problem: OpenCV has functionality to track feature points using sparse optical flow (KLT), but this tracking feature is not yet implemented in cv-eigen.
If you want to do tracking with dense optical flow, you could employ some kind of voting system around your feature points. Depending on your FAST mode, you could take the pixels in a region around your keypoint and let them "vote" on which direction the pixels moved in the next image.
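The voting idea can be sketched in a few lines. This is a minimal, self-contained illustration (the `FlowField` container below is hypothetical, not the cv-eigen API): the simplest "vote" is just averaging the flow vectors in a patch around the keypoint.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Hypothetical dense flow container: one (dx, dy) pair per pixel, row-major.
struct FlowField {
    int rows, cols;
    std::vector<std::array<float, 2>> data;
    const std::array<float, 2>& at(int y, int x) const { return data[y * cols + x]; }
};

// "Vote" on the dominant motion around a keypoint by averaging the flow
// vectors in a (2*radius+1)^2 patch. More robust schemes (median, histogram
// of directions) follow the same pattern.
std::array<float, 2> voteFlow(const FlowField& flow, int cx, int cy, int radius)
{
    float sx = 0.f, sy = 0.f;
    int count = 0;
    for (int y = cy - radius; y <= cy + radius; ++y) {
        for (int x = cx - radius; x <= cx + radius; ++x) {
            if (y < 0 || y >= flow.rows || x < 0 || x >= flow.cols)
                continue; // skip pixels outside the image
            sx += flow.at(y, x)[0];
            sy += flow.at(y, x)[1];
            ++count;
        }
    }
    if (count == 0)
        return {0.f, 0.f};
    return {sx / count, sy / count};
}
```

A median or direction-histogram vote would be less sensitive to outliers at motion boundaries, but the averaging version shows the structure.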

However, I would strongly suggest you use feature descriptors to track feature points! In cve/feature you can find different "extractor" functors. These functors extract feature descriptors from keypoints, which you can then match between different images. You can also find an example of feature matching in examples/find_matches.cpp.

Note: for comparing feature descriptors as in the example, you have to include another header-only library of mine: https://github.com/Rookfighter/knn-cpp
Alternatively, you could just copy the relevant code paths into your project.

Also, ORB features are usually a better fit than FAST features (ORB is essentially an extended version of FAST).

I would suggest the following:

  • detect ORB features in image A
  • detect ORB features in image B
  • extract ORB descriptors in image A
  • extract ORB descriptors in image B
  • match descriptors to find matching keypoints using brute force
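The last step of the pipeline above, brute-force matching, reduces to finding the descriptor with the smallest Hamming distance. A self-contained sketch (using `std::bitset` to stand in for ORB's 256-bit binary descriptors; the real extractor output and knn-cpp's API will differ):

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// A 256-bit binary descriptor, as produced by ORB/BRIEF-style extractors.
using Descriptor = std::bitset<256>;

// Brute-force matching: for each descriptor in `a`, find the index of the
// descriptor in `b` with the smallest Hamming distance. Returns one index
// per entry of `a` (-1 if `b` is empty).
std::vector<int> matchBruteForce(const std::vector<Descriptor>& a,
                                 const std::vector<Descriptor>& b)
{
    std::vector<int> matches(a.size(), -1);
    for (std::size_t i = 0; i < a.size(); ++i) {
        std::size_t best = 257; // larger than any possible distance
        for (std::size_t j = 0; j < b.size(); ++j) {
            std::size_t d = (a[i] ^ b[j]).count(); // Hamming distance = popcount of XOR
            if (d < best) {
                best = d;
                matches[i] = static_cast<int>(j);
            }
        }
    }
    return matches;
}
```

In practice you would also apply a ratio test or cross-check to reject ambiguous matches; knn-cpp accelerates the nearest-neighbor search itself.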

from nvision-cpp.

Smithangshu commented on June 12, 2024

Hi Rookfighter,

I am aware of the Horn-Schunck optical flow problem in cv-eigen. It occurs when I supply a 640x480 image and looks like a memory corruption issue; with a 320x240 image the problem goes away, and otherwise it has never come up for me. Still, it would be great if you could fix it.

Now about tracking points with dense optical flow: for sparse optical flow (i.e. Lucas-Kanade) in OpenCV it is easy to track. Even for dense optical flow (i.e. Gunnar Farnebäck), I know how to track using the following code.
```cpp
vector<Point2f> currentPoints = vector<Point2f>();
for (unsigned int n = 0; n < prevPoints.size(); ++n)
{
    float ix = floor(prevPoints[n].x);
    float iy = floor(prevPoints[n].y);
    float wx = prevPoints[n].x - ix;
    float wy = prevPoints[n].y - iy;
    float w00 = (1.f - wx) * (1.f - wy);
    float w10 = (1.f - wx) * wy;
    float w01 = wx * (1.f - wy);
    float w11 = wx * wy;
    if (prevPoints[n].x < flow.cols - 1 && prevPoints[n].y < flow.rows - 1)
    {
        currentPoints.push_back(
            prevPoints[n] +
            flow.ptr<cv::Point2f>(iy, ix)[0] * w00 +
            flow.ptr<cv::Point2f>(iy + 1, ix)[0] * w10 +
            flow.ptr<cv::Point2f>(iy, ix + 1)[0] * w01 +
            flow.ptr<cv::Point2f>(iy + 1, ix + 1)[0] * w11);
    }
}
```
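The bilinear weighting used above can be checked in isolation. Here is a self-contained version of the same interpolation (the `Flow` container below is a minimal stand-in, not OpenCV's or cv-eigen's type), which makes it easy to verify the math independently of either library:

```cpp
#include <cmath>
#include <vector>

// Minimal row-major flow container: two floats (dx, dy) per pixel.
struct Flow {
    int rows, cols;
    std::vector<float> data; // size rows * cols * 2
    float at(int y, int x, int c) const { return data[(y * cols + x) * 2 + c]; }
};

// Bilinearly interpolate the flow vector at a subpixel position (px, py),
// using the same four weights as the tracking snippet above.
void sampleFlow(const Flow& flow, float px, float py, float& dx, float& dy)
{
    int ix = static_cast<int>(std::floor(px));
    int iy = static_cast<int>(std::floor(py));
    float wx = px - ix, wy = py - iy;
    float w00 = (1.f - wx) * (1.f - wy);
    float w10 = (1.f - wx) * wy;
    float w01 = wx * (1.f - wy);
    float w11 = wx * wy;
    dx = flow.at(iy, ix, 0) * w00 + flow.at(iy + 1, ix, 0) * w10 +
         flow.at(iy, ix + 1, 0) * w01 + flow.at(iy + 1, ix + 1, 0) * w11;
    dy = flow.at(iy, ix, 1) * w00 + flow.at(iy + 1, ix, 1) * w10 +
         flow.at(iy, ix + 1, 1) * w01 + flow.at(iy + 1, ix + 1, 1) * w11;
}
```

On a uniform flow field the interpolated vector must equal the constant value at any subpixel position, which is a quick sanity check for the weights.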

And it works fine. In the same way, I tried to track using Horn-Schunck optical flow with the following code.

```cpp
float ix = floor(keyPoints(0, n));
float iy = floor(keyPoints(1, n));
float wx = keyPoints(0, n) - ix;
float wy = keyPoints(1, n) - iy;
float w00 = (1.f - wx) * (1.f - wy);
float w10 = (1.f - wx) * wy;
float w01 = wx * (1.f - wy);
float w11 = wx * wy;
if (keyPoints(0, n) > 1 && keyPoints(1, n) > 1 &&
    keyPoints(0, n) < flowImg.dimension(1) - 1 &&
    keyPoints(1, n) < flowImg.dimension(0) - 1)
{
    keyPointsTracked(0, n) = keyPoints(0, n) +
        flowImg(iy, ix, 0) * w00 +
        flowImg(iy + 1, ix, 0) * w10 +
        flowImg(iy, ix + 1, 0) * w01 +
        flowImg(iy + 1, ix + 1, 0) * w11;
    keyPointsTracked(1, n) = keyPoints(1, n) +
        flowImg(iy, ix, 1) * w00 +
        flowImg(iy + 1, ix, 1) * w10 +
        flowImg(iy, ix + 1, 1) * w01 +
        flowImg(iy + 1, ix + 1, 1) * w11;
}
else
{
    keyPointsTracked(0, n) = 0;
    keyPointsTracked(1, n) = 0;
}
```

But sadly it does not work.

Also, I tried to track points using Lucas-Kanade from cv-eigen in the same way, but it does not work. Can you please suggest how I can use it?

Now let's come to ORB. In general, the problem with ORB is that it is for detection, not tracking. It sometimes suffers from a lack of accuracy, and at the same time brute-force matching is a little slower than tracking with optical flow. Optical flow is fast and at the same time more accurate.

Basically, my purpose is tracking a planar object in the wild. I have very limited points on the plane, so sparse OF (i.e. OpenCV LK OF) fails in certain poses with a challenging background. So I am thinking of using dense OF, but OpenCV's GF OF is very slow, so I was searching for an alternative and found cv-eigen, which seems to provide very fast results, especially LK and HS (for RB I am getting an error and could not run it).

So can you please help me track the sparse points with LK and HS in cv-eigen?


Rookfighter commented on June 12, 2024

Alright, as far as I understood your approach: you calculate the optical flow of your feature point using bilinear interpolation (as the feature point has subpixel accuracy). Then you basically move your previous point by this optical flow and assume that is the new position of the feature. That sounds alright to me.

Now, I did not fully get what you were testing and what worked for you and what did not. I understood that the following scenario worked for you:

  • you used FAST from OpenCV + Gunnar Farnebäck optical flow from OpenCV + your weighted pixel movement approach

And I understood that the following did not work for you:

  • you used FAST from OpenCV + Horn-Schunck optical flow from cv-eigen + your weighted pixel movement approach
  • you used FAST from cv-eigen + Lucas-Kanade optical flow from cv-eigen + your weighted pixel movement approach

I would suggest you post some sample images, so I can get a better idea of the kind of images you are using. In general, not all optical flow algorithms work equally well in different scenarios.

Also, if possible, post some data files (CSV) with optical flow generated with Lucas-Kanade from OpenCV and with Lucas-Kanade from cv-eigen, based on the same image pair. We can then get a better idea of whether your issue is related to a cv-eigen bug or an application-domain problem.


Smithangshu commented on June 12, 2024

Hi Rookfighter,

Basically, the following approaches worked for me:

  1. OpenCV FAST (or any feature detector) + OpenCV Lucas-Kanade optical flow
  2. OpenCV FAST (or any feature detector) + OpenCV Gunnar Farnebäck optical flow + weighted pixel movement

And the following approaches did not work for me (partially):

  3. cv-eigen FAST + cv-eigen Horn-Schunck optical flow + weighted pixel movement
  4. cv-eigen FAST + cv-eigen Lucas-Kanade optical flow + weighted pixel movement

Note, I wrote "partially" above because the directions of movement I am getting are correct. The problem is with the value, the magnitude of the movement.

Basically, I am using a webcam stream to find the optical flow of points detected by FAST.

For example, I want to track my eyeballs (2 points) over the next several frames, which I believe is possible with optical flow, because I did it with OpenCV. All we need is the absolute pixel displacement of the points from the previous frame to the current frame, to calculate the updated positions of those points in the current frame. It should be very easy.

Certainly, I am sure that your implementations of optical flow work correctly. But I cannot derive a meaningful result from the functions. What I want to calculate is the following:

X' = X + dX;
Y' = Y + dY;

Where (X, Y) is the tracked point from the last frame,
(dX, dY) are the absolute pixel movements along the X and Y axes, respectively, of point (X, Y) from the last frame to the current frame,
and the resulting tracked point in the current image is (X', Y').

I think this is completely independent of the visual environment, because the color-coded result from the optical flow looks correct to me. The only thing I need is the absolute pixel displacement. Maybe it is normalized or scaled somehow, which is why I have not been able to recover it.
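The normalization hypothesis is easy to test numerically: if one flow field is a scaled copy of another, the median ratio of per-pixel magnitudes recovers the scale factor. A self-contained sketch (assuming both fields are flattened to the same `dx, dy`-interleaved layout; the function name is my own):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Compare two flow fields on the same image pair (dx, dy interleaved,
// identical size). Returns the median of per-pixel magnitude ratios |a|/|b|;
// a value far from 1 would indicate the fields differ by a global scale,
// i.e. one of them is normalized.
float medianMagnitudeRatio(const std::vector<float>& a,
                           const std::vector<float>& b)
{
    std::vector<float> ratios;
    for (std::size_t i = 0; i + 1 < a.size(); i += 2) {
        float ma = std::hypot(a[i], a[i + 1]);
        float mb = std::hypot(b[i], b[i + 1]);
        if (mb > 1e-6f) // skip near-zero vectors to avoid division blow-up
            ratios.push_back(ma / mb);
    }
    if (ratios.empty())
        return 0.f;
    // Median via partial sort; robust against outliers at motion boundaries.
    std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2,
                     ratios.end());
    return ratios[ratios.size() / 2];
}
```

Running this on cv-eigen's output versus OpenCV's on the same image pair would quickly confirm or rule out a scaling difference.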

You can try it yourself: open your webcam, detect a few points to track using FAST (or manually select a few points), and keep track of the updated positions of those points over the next few consecutive frames. If you can do that successfully, that is exactly the solution I am requesting.

Precisely: tracking sparse points with cv-eigen's dense optical flow.

Regards,
Smithangshu

