
Comments (10)

johannes-graeter commented on July 30, 2024

Hi Claud,

thanks for investing your time :)
yes I tested it intensively while finalizing my PhD.
I used two autonomous driving platforms:
One with cameras and a Velodyne HDL-64: there I ran LIMO with one of the cameras and the Velodyne, and I did not use semantics.
The other setup had no LIDAR but a stereo camera pair and semantics; there the depth extraction from LIDAR was replaced by depth from stereo.
The results on the first platform were of similar quality to KITTI and computed in real time; the stereo system was of lower quality but worked well.

But both were driving platforms, so the setup was quite similar to KITTI.
I am not sure how the system would perform on different robotic platforms though.
The key component here is the depth extraction for the tracklets (https://github.com/johannes-graeter/mono_lidar_depth/tree/master/monolidar_fusion). In the current code the depth extraction relies on the dense 64-layer Velodyne and interpolates depth for the camera tracklets with the PCL.

However, if you only have 16 layers, that interpolation is no longer accurate and the implemented heuristics will fail. I designed LIMO specifically so that the modules are interchangeable, so in fact what you can do (and what I did for stereo) is rewrite the depth extraction node so that it can handle 16 layers, which would be a great contribution.
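To give a rough idea, the interpolation could be sketched like this (a simplified illustration, not the actual monolidar_fusion code; the neighbour count and the pixel gate are made-up values):

```python
import math

def interpolate_depth(feature_uv, lidar_uvd, k=3, max_px_dist=10.0):
    """Estimate the depth of a tracked image feature from nearby projected
    LIDAR points. A simplified illustration only; k and max_px_dist are
    made-up values, not LIMO's actual heuristics.

    feature_uv: (u, v) pixel position of the tracked feature.
    lidar_uvd:  list of (u, v, depth) for LIDAR points projected into the image.
    """
    fu, fv = feature_uv
    # The k closest projected LIDAR points in the image plane.
    neighbours = sorted(
        ((math.hypot(u - fu, v - fv), d) for u, v, d in lidar_uvd),
        key=lambda n: n[0],
    )[:k]
    neighbours = [n for n in neighbours if n[0] <= max_px_dist]
    if len(neighbours) < k:
        # Too little support: with a sparse 16-layer scan the vertical gaps
        # between rings grow, this gate fails, and the feature gets no depth.
        return None
    # Inverse-distance-weighted average of the neighbour depths.
    weights = [1.0 / (dist + 1e-6) for dist, _ in neighbours]
    depths = [d for _, d in neighbours]
    return sum(w * d for w, d in zip(weights, depths)) / sum(weights)
```

With a 64-layer scan the nearest projections are usually only a few pixels away from a feature; with 16 layers the gaps between rings grow, and a node written for 16 layers would have to interpolate or gate differently.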
If you have a GPU on your system, I would personally go for extracting dense flow (with https://github.com/lmb-freiburg/flownet2 or https://github.com/simonmeister/UnFlow, or perhaps there are newer ones out by now :) ) and track the reprojected LIDAR points in the images for as many frames as possible. You could convert these into tracklet_depth_messages (as in this repo) and simply feed them into LIMO.

I have very high hopes for this approach; my first few tries at using machine-learning-based flow extraction for SLAM looked very promising :) Unfortunately, I now have only a little spare time...

If you are interested we could share some ideas via mail or set up a skype meeting if you want :)

Regards,

Johannes

from limo.

johannes-graeter commented on July 30, 2024

Hi there,

thank you for your work to make limo spin on your platform! The assertion suggests that your camera model is not symmetric (focal length in x == focal length in y); the model is extracted from your camera's intrinsic matrix. Did you undistort your images to a pinhole camera model? If so, could you compute a different model that has the same focal length in the x and y directions?
It could also be changed in the code, but I think it is cleaner to undistort your images to a pinhole model that is symmetric.
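For reference, the symmetry condition boils down to something like this (a hypothetical sketch, not the actual check inside limo):

```python
def check_symmetric_pinhole(K, rel_tol=1e-6):
    """K is a 3x3 intrinsic matrix given as nested lists:
        [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    Returns True if the focal lengths in x and y match, i.e. the image
    can be treated as coming from a symmetric pinhole camera."""
    fx, fy = K[0][0], K[1][1]
    return abs(fx - fy) <= rel_tol * max(abs(fx), abs(fy))
```

After undistorting/rectifying, the K you write into camera_info (and the corresponding entries of P) should pass this check.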

Regards,

Johannes


Claud1234 commented on July 30, 2024

Hi:

I rectified my grayscale images as you suggested, and I also changed the values in camera_info to the new K and P matrices; now K and P have the same fx and fy.

The problem is that there is still no log message following "got the transformation" in the terminal, exactly the same as before (the second terminal screenshot I posted last time).

Now I am thinking about the topics /tf and /tf_static.
I noticed that in your bag file, /tf is /local_cs (ground truth) to /sensor/camera.
/tf_static contains /sensor/camera to /sensor/velodyne and /sensor/camera to the four image frames (gray left, gray right, color left, color right).

Q1: Since LIMO only uses the gray-left image, can I say that the transforms in /tf_static between /sensor/camera and the other three image frames are not used in LIMO's computation?

Q2: Is this /tf_static important for LIMO? In my own bag file, /tf is the transform between /sensor/velodyne and the camera directly, and my /tf_static is something unrelated. Is this the reason I get nothing after "got the transformation" (like the second terminal screenshot I posted last time)? Must I set up my /tf and /tf_static as in your bag files?
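To make Q2 concrete, this is how I understand the transform chain (a toy sketch; the frame layout and all numbers are made up):

```python
def matmul4(A, B):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(T, p):
    """Apply the 4x4 homogeneous transform T to a 3D point p."""
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

# Made-up static extrinsics (pure translations for illustration):
# camera -> body and body -> velodyne, chained into camera -> velodyne,
# which is the kind of chain a tf lookup resolves.
T_cam_body = [[1, 0, 0, 0.0], [0, 1, 0, 0.0], [0, 0, 1, 0.5], [0, 0, 0, 1]]
T_body_velo = [[1, 0, 0, -1.2], [0, 1, 0, 0.0], [0, 0, 1, -0.2], [0, 0, 0, 1]]
T_cam_velo = matmul4(T_cam_body, T_body_velo)
```

As far as I understand, what matters is that the camera-to-velodyne chain can be looked up at all; a fixed extrinsic like this would normally be published on /tf_static.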

Really, thanks for any help.

Regards


Claud1234 commented on July 30, 2024

Hi:

Good news: you can forget my previous comment, because I have made LIMO run on my own data. There is still no computation coming out of LIMO (I will explain later), but at least it can 'run' without errors.

Regardless of performance, computation speed, etc., I found that only four topics are needed to make LIMO work (I tried this with the template 04.bag file you provided).
image

I set up my own bag file like this as well, because this way I do not need to handle the complex /tf; I just need to give the absolute position and orientation between the LIDAR and the camera in /tf_static.

After these operations, LIMO is able to 'run', but nothing comes out in the log; everything is 0!
image

Another issue is that there are always long delays between the computation steps which I showed in the picture above.
image

In your template bag file, this usually appears only once before a new computation starts.

These are the new problems I have run into right now.
As for the second (delay) problem: if it is only about optimization speed and does not affect the computation, let's just ignore it for now.

The key problem is: why are all the logged results zero?

Here is the situation with my bag file.

First, when playing my bag file, the topics /tf_static, /image and /velodyne do not start at the same moment in the bag; each has about a 0.5 s delay relative to the others (/tf_static starts around 0.5 s, /image around 0.9 s and /velodyne around 1.4 s).

Second, /image and /velodyne are not synchronized in their time stamps. Their nanosecond time stamps differ and also advance at different intervals. Do you think this is the reason the output is all zeros? Is it compulsory to synchronize the time stamps precisely, down to nanoseconds?
Here is the info of my bag file:
image
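For what it's worth, the pairing I imagine being needed could be sketched like this (a standalone approximation in the spirit of ROS message_filters' ApproximateTimeSynchronizer; the 5 ms tolerance is an arbitrary guess):

```python
import bisect

def pair_messages(image_stamps, cloud_stamps, max_offset_ns=5_000_000):
    """Pair each image timestamp with the nearest point-cloud timestamp
    (both in nanoseconds) and keep only pairs closer than max_offset_ns
    (5 ms here, an arbitrary tolerance). Unmatched messages are dropped."""
    clouds = sorted(cloud_stamps)
    pairs = []
    for t in sorted(image_stamps):
        # Only the cloud stamps immediately before and after t can be nearest.
        i = bisect.bisect_left(clouds, t)
        candidates = clouds[max(0, i - 1):i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= max_offset_ns:
            pairs.append((t, best))
    return pairs
```

If hardly any pairs survive such a matching, the estimator has nothing to optimize, which would be consistent with the all-zero output.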

Thanks for any advice.

Regards


johannes-graeter commented on July 30, 2024

Hi Claud,

sorry for the late reply. I just recently changed employers, so some things got lost along the way...

First of all, great that you made it run, even though no results come out yet :)
First problem, runtime:
feature matching and tracking takes too long (it should take about a tenth of that time for KITTI images).
I can think of 2 reasons:
a) your images are too big -> scale them down to 1 megapixel
b) you did not build in release mode
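For a), the scale factor and the matching camera intrinsics could be computed like this (a sketch; the resize itself is up to your image library):

```python
import math

def megapixel_scale(width, height, target_mp=1.0):
    """Scale factor that brings width*height down to roughly target_mp
    megapixels (1.0 if the image is already small enough)."""
    mp = width * height / 1e6
    return 1.0 if mp <= target_mp else math.sqrt(target_mp / mp)

def scale_intrinsics(fx, fy, cx, cy, s):
    """The intrinsics must be scaled by the same factor as the image,
    otherwise camera_info no longer matches the resized frames."""
    return fx * s, fy * s, cx * s, cy * s
```

For example, a 2048x1536 image (about 3.1 MP) gets s ≈ 0.56, i.e. roughly 1155x866 pixels.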

Second problem, all zeros:
somehow no parameters are added to the SLAM problem, as the ceres output suggests...
It is hard to debug that from here, but it would help if you could send me a sample of the data (host it somewhere so I can download it) so I can have a look at it...


johannes-graeter commented on July 30, 2024

Hi Claud, did you try the suggestions, and do you have some feedback?


baladeer commented on July 30, 2024

Hi all, how do you disable the semantics?


johannes-graeter commented on July 30, 2024

See issues #16 and #30 and come back to me with questions :)


Claud1234 commented on July 30, 2024

Hi all, how do you disable the semantics?

Actually, from the perspective of making LIMO 'run' with your own dataset, the semantics are not compulsory. The minimum topics the input bag should contain are grayscale images, camera_info, tf_static and point clouds. I found that with these four topics LIMO is already able to run, but this cannot guarantee the quality of the result.


Claud1234 commented on July 30, 2024

Hi Claud, did you try the suggestions, and do you have some feedback?

Hi. Thanks for following up, first of all. I was busy with another project in recent days, so I did not check this page frequently.

Actually, all the problems in my last post have been solved. I think the image scale and the build are OK. The problem was the timestamps of the topics. The reason I got 0 in the output and the long delays is that the time stamps of the images, camera_info and point clouds were not synchronized, like this:
image
You can see that none of the topics are synchronized inside the bag.

I succeeded in solving the synchronization issue. The new bag looks like this:
image

Now I am able to get effective results and there is no delay anymore.
In general, LIMO has a very strict synchronization requirement.

Even though I can get results, they are still not as good as on KITTI. The environment of our dataset is much 'fiercer' than KITTI's. Another point is that our Velodyne is a VLP-16 while KITTI used an HDL-64, so our point cloud is not as dense as KITTI's.

At present, it can only produce effective results for a short period at the beginning; then it loses track of the keyframes and the path result goes crazy as well.

I do not think this problem is about the configuration anymore, but rather about the data itself. As I said before, our dataset is more like 'industrial data' compared with KITTI's. There are many possible reasons for this; for example, the contents of the images and the Velodyne scans may not pair up with each other.

I am curious to know: have you ever tried LIMO on practical datasets other than KITTI?

