
orb_slam2-documented's People

Contributors

alejandrosilvestri, leandro-bauret, raulmur


orb_slam2-documented's Issues

The theory behind vector<size_t> Frame::GetFeaturesInArea(const float &x, const float &y, const float &r, const int minLevel, const int maxLevel) const

Dear @AlejandroSilvestri

Sir, an important step in reducing the 3D point candidates in the ORBmatcher::SearchByProjection() methods is finding the keypoints nearest to the 2D projection of each 3D map point before optimization.

My questions are the following:

  1. Are the undistorted keypoints arranged in a KD-tree?
  2. Is Frame::GetFeaturesInArea performing the same operation as https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.query_ball_point.html#scipy.spatial.cKDTree.query_ball_point?
  3. Where can I look up the theory behind the Frame::GetFeaturesInArea() method? I have searched for it extensively online but so far could not find anything concrete. (See the sketch after this list.)
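For reference: ORB-SLAM2 does not build a KD-tree here. The Frame constructor bins the undistorted keypoints into a coarse image grid (FRAME_GRID_COLS x FRAME_GRID_ROWS, 64 x 48 in the code), and GetFeaturesInArea only scans the grid cells the query window can overlap. A minimal sketch of the idea, with hypothetical names and simplified bookkeeping, not the repo's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Simplified sketch of the grid search behind Frame::GetFeaturesInArea().
// Keypoints are pre-binned into a coarse GRID_COLS x GRID_ROWS grid; a
// query visits only the cells the (x, y, r) window overlaps, then runs an
// exact per-point test on their contents.
struct Point { float x, y; };

constexpr int GRID_COLS = 64, GRID_ROWS = 48;

std::vector<std::size_t> featuresInArea(
    const std::vector<Point>& keypoints,
    const std::vector<std::size_t> (&grid)[GRID_COLS][GRID_ROWS],
    float cellW, float cellH,  // pixel size of one grid cell
    float x, float y, float r) {
  std::vector<std::size_t> result;
  // Rectangle of grid cells touched by the query window, clamped to bounds.
  const int c0 = std::max(0, static_cast<int>((x - r) / cellW));
  const int c1 = std::min(GRID_COLS - 1, static_cast<int>((x + r) / cellW));
  const int r0 = std::max(0, static_cast<int>((y - r) / cellH));
  const int r1 = std::min(GRID_ROWS - 1, static_cast<int>((y + r) / cellH));
  for (int c = c0; c <= c1; ++c)
    for (int rw = r0; rw <= r1; ++rw)
      for (std::size_t idx : grid[c][rw]) {
        // Exact test. Note: the real GetFeaturesInArea uses this box test
        // (|dx| < r && |dy| < r), not a Euclidean-distance test.
        const float dx = keypoints[idx].x - x;
        const float dy = keypoints[idx].y - y;
        if (std::fabs(dx) < r && std::fabs(dy) < r)
          result.push_back(idx);
      }
  return result;
}
```

So the operation is close in spirit to cKDTree.query_ball_point, except that the acceptance region is a box rather than a ball, and the acceleration structure is a uniform grid rather than a tree. For keypoints spread fairly evenly over an image, cell lookup is O(1) and no tree construction is needed, which is why there is little dedicated "theory" to find: it is a standard uniform-grid spatial hash.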

With thanks,
@Mechazo11

Enormous scale error on mono_kitti

Hi,

I'm trying to use ORB-SLAM2 to create a GPS alternative for a moving car.

When running mono_kitti on KITTI sequence 08, I should get a roughly correct scale (leaving drift aside), according to this paper -> https://arxiv.org/pdf/1610.06475v2.pdf

But my scale is really off (the westernmost points are at -20). Does this mean that running ORB-SLAM2 on a monocular camera is no longer a good option, compared to the first ORB-SLAM algorithm?
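A note on why this happens: a single camera cannot observe metric scale, so a monocular reconstruction is only defined up to a global factor s. Scaling the whole map and all camera translations by the same s changes only the projective depth and leaves every image measurement unchanged:

```latex
% Projection of a 3D point X under pose (R, t): for any s > 0, scaling the
% scene and the translation together is invisible in the image.
\lambda \mathbf{x} = K\,(R\mathbf{X} + \mathbf{t})
\;\Longrightarrow\;
(s\lambda)\,\mathbf{x} = K\bigl(R(s\mathbf{X}) + s\mathbf{t}\bigr)
```

ORB-SLAM2's monocular mode shares this property with the original ORB-SLAM, so a wrong absolute scale is not a regression of the newer system. Evaluations like the one in the linked paper first align the estimated trajectory to ground truth with a similarity transform that includes scale, as discussed in the next issue.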

KeyFrameTrajectory and ground-truth data not overlapping

Dear @AlejandroSilvestri, first of all, thank you very much for the documentation of ORB-SLAM2.
I have installed and built ORB-SLAM2 on my computer. I wanted to check the monocular algorithm on the EuRoC dataset, using the V1_02_medium sequence, and I ran it with the command specified by the author. The algorithm runs, and I can see the point cloud and keyframe positions during the run. At the end of the run, the code writes a KeyFrameTrajectory.txt file holding the position and orientation information. I want to compare the ORB-SLAM output against the ground-truth data, but I could not get the trajectories to overlap. I tried it myself, and I also tried EVO; neither worked. By the way, the author of EVO already notes that the tool handles the TUM dataset better than EuRoC. How can I see the overlapping trajectories? (See the alignment sketch below.)
Yes, a monocular camera doesn't provide depth information; before A. Davison we didn't see a monocular SLAM application, if I am not wrong. But the camera is moving, so it can use two sequential images for triangulation. If monocular SLAM cannot recover the exact position without manual scaling, or drifts by the time the trajectory is completed, then how can we talk about the success of monocular SLAM algorithms?
Could you please clarify this for me?
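One way to get overlapping plots, sketched under the assumption that the two trajectories have already been associated by timestamp: estimate a similarity transform (rotation, translation and, crucially for monocular, scale) with Umeyama's closed-form method, which Eigen ships as Eigen::umeyama. The data below is a hypothetical toy example:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
  // Each column is a 3D position. In practice 'est' would be parsed from
  // KeyFrameTrajectory.txt and 'gt' from the EuRoC ground-truth csv,
  // matched by timestamp; toy values here.
  Eigen::Matrix3Xd est(3, 4), gt(3, 4);
  est << 0, 1, 2, 3,
         0, 0, 1, 1,
         0, 0, 0, 0;
  Eigen::AngleAxisd rot(0.5, Eigen::Vector3d::UnitZ());
  gt = 20.0 * (rot.toRotationMatrix() * est);  // rotated and 20x larger

  // Closed-form least-squares similarity alignment (Umeyama 1991).
  // with_scaling=true is essential for monocular trajectories.
  Eigen::Matrix4d T = Eigen::umeyama(est, gt, /*with_scaling=*/true);

  // The top-left 3x3 block of T is s*R, so its determinant is s^3.
  double s = std::cbrt(T.block<3, 3>(0, 0).determinant());
  std::cout << "recovered scale: " << s << "\n";  // ~20

  // Aligned estimate, ready to plot on top of the ground truth.
  Eigen::Matrix3Xd aligned =
      (T.topLeftCorner<3, 3>() * est).colwise() + T.topRightCorner<3, 1>();
  (void)aligned;
  return 0;
}
```

If I recall evo's options correctly, this corresponds to running it with both alignment and scale correction enabled (its --align and --correct_scale flags); without scale correction, a monocular trajectory will essentially never lie on top of EuRoC's metric ground truth, which is the usual reason the plots "don't overlap".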

camera position

Hello, could you kindly tell me how to get the camera position (XYZ) and orientation (ABC)? I am going to align a point cloud with the camera position.
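For what it's worth: ORB-SLAM2 stores poses as world-to-camera transforms Tcw (this is what System::TrackMonocular returns and what KeyFrame::GetPose holds), so the camera position in world coordinates is -Rᵀt. A small helper, assuming that convention:

```cpp
#include <opencv2/core/core.hpp>

// ORB-SLAM2 poses (e.g. the cv::Mat returned by System::TrackMonocular,
// or KeyFrame::GetPose) are Tcw: they map world points into the camera
// frame. The camera position in the world frame is therefore -R^T * t.
cv::Mat cameraCenterWorld(const cv::Mat& Tcw) {
  cv::Mat Rcw = Tcw.rowRange(0, 3).colRange(0, 3);
  cv::Mat tcw = Tcw.rowRange(0, 3).col(3);
  return -Rcw.t() * tcw;  // 3x1 vector: camera XYZ in the world frame
}
// The camera's orientation in the world frame is Rwc = Rcw.t(); Euler
// angles ("ABC") can then be extracted in whatever convention you need.
```

Note that KeyFrame::GetCameraCenter() in the repo already returns exactly this quantity for keyframes.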

Feature detection's effect on initialization

Hello @AlejandroSilvestri
Thank you so much for your documented version of ORB-SLAM2; it has been very helpful for me.
However, I have some questions about feature detection's effect on initialization time. I am still new to computer vision, and I do not understand why FAST detects some features in some frames yet fails to detect them when re-running the same video. What is their effect on the algorithm's initialization time? And could you suggest anything for reducing the initialization time?
Thanks in Advance
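A note that may help here: ORB-SLAM2's ORBextractor divides the image into cells and runs FAST twice per cell, first with the strict threshold from the settings file (ORBextractor.iniThFAST, default 20) and, where a cell yields nothing, again with the permissive one (ORBextractor.minThFAST, default 7). Lowering these values produces more keypoints and typically speeds up monocular initialization, at the price of noisier features. A minimal sketch of that fallback, not the repo's actual code:

```cpp
#include <opencv2/features2d.hpp>
#include <vector>

// Two-threshold FAST strategy in the spirit of ORB-SLAM2's ORBextractor:
// try the strict threshold first, and retry a cell with the permissive
// threshold only if nothing fired there.
std::vector<cv::KeyPoint> detectWithFallback(const cv::Mat& grayCell,
                                             int iniTh = 20, int minTh = 7) {
  std::vector<cv::KeyPoint> kps;
  cv::FAST(grayCell, kps, iniTh, /*nonmaxSuppression=*/true);
  if (kps.empty())
    cv::FAST(grayCell, kps, minTh, true);  // retry with the lower threshold
  return kps;
}
```

FAST itself is deterministic on identical pixels, so run-to-run differences usually come from the input frames not being bit-identical (live capture, dropped or retimed frames) or from randomized steps later in the pipeline (e.g. RANSAC in the initializer), rather than from the detector.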

How is the current (optimized) pose concatenated with poses from the beginning?

Hello @AlejandroSilvestri,

Thank you very much for creating this repo where you documented ORB-SLAM2. It's been a great source of material for my project.

According to the classical tutorials on visual SLAM, for 2D-2D and 3D-3D correspondences we concatenate the current relative pose T_k with the absolute pose of the previous frame, C_i = C_{i-1} · T_k (i.e. C_i = C_j T_k with j = i - 1). Reference: https://www.researchgate.net/publication/220556161_Visual_Odometry_Tutorial (p. 9).

I was wondering where exactly this operation is done in ORB-SLAM2? My understanding is that when we perform local map optimization this process is "implicitly" performed, but I just want to be sure.
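For context, a sketch of the two views of the same operation. ORB-SLAM2 never accumulates this product over the whole trajectory: each Frame stores an absolute world-to-camera pose Tcw, and Tracking only forms one relative step, the constant-velocity "motion model", to predict the next absolute pose before pose optimization refines it (see Tracking::TrackWithMotionModel in the repo). The function names below are illustrative:

```cpp
#include <Eigen/Dense>

using Mat4 = Eigen::Matrix4d;  // poses as 4x4 homogeneous transforms

// Classical VO concatenation from the tutorial:
// absolute pose C_i = previous absolute pose composed with relative T_k.
Mat4 concatenate(const Mat4& C_prev, const Mat4& T_k) {
  return C_prev * T_k;
}

// The ORB-SLAM2 equivalent: predict the new absolute Tcw from the last
// frame's Tcw and the relative motion of the previous step ("velocity");
// the result is then refined by pose optimization, and local BA / loop
// closing later adjust the absolute poses directly.
Mat4 predictTcw(const Mat4& velocity, const Mat4& lastTcw) {
  return velocity * lastTcw;
}
```

So the concatenation does happen, but only over a single step as a prediction; the long-term consistency you would get from chaining is instead maintained by optimizing the absolute poses, which is why it looks "implicit" in the code.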

With best,
@Mechazo11
