
Comments (8)

othlu commented on August 19, 2024

Strictly speaking, in the calibration case it does not matter whether the timestamps refer to the start of the frame, the end of the frame, or any other (fixed!) line within the frame. This is only valid as long as the intra-frame line scanning frequency is constant, which is a core assumption of our method.
We did indeed align the timestamps between our groundtruth and the camera with a constant offset to perform the trajectory analysis. That offset accounts for any potential offset with respect to the first line exposure, as well as offsets in the groundtruth timestamps.
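To spell out why the choice of reference line is immaterial (a sketch under the constant line delay assumption stated above; $d$, $v$, and $v_r$ are illustrative notation rather than symbols from the paper): with a constant line delay $d$, line $v$ of a frame stamped $\bar{t}$ is exposed at

$t_v = \bar{t} + v\,d.$

If the frame timestamps instead refer to some other fixed line $v_r$, every $t_v$ is shifted by the same constant $v_r\,d$, and that constant is absorbed by the single camera-to-groundtruth time offset estimated in the trajectory alignment.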
Luc


JzHuai0108 commented on August 19, 2024

Thanks for your reply. Totally agree.

BTW, the current implementation of Kalibr requires a pattern such as a checkerboard or an AprilGrid. We are supposed to print out the pattern and attach it to a flat board. This can be cumbersome because the board may not be perfectly flat and the printed pattern may be blurred. For comparison, Agisoft Lens (http://downloads.agisoft.ru/lens/doc/en/lens.pdf) performs GS camera calibration with a pattern displayed on a computer screen. I believe a pattern on the screen has fewer issues and saves a lot of effort.

Just curious, have you guys looked into using a pattern on a screen for calibration?
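In case it is useful, here is a minimal sketch of how a checkerboard could be rendered and shown full screen with OpenCV; the pattern dimensions and the monitor pixel pitch are made-up placeholders, and the physical square size would have to be measured on the actual display before using it as a metric target:

```python
import numpy as np
import cv2

# Hypothetical pattern: 9x7 squares drawn at 100 px per square. The metric
# square size follows from the monitor's pixel pitch (e.g. roughly 0.275 mm
# per pixel on a 24-inch 1080p panel) and must be verified for the display
# actually used before putting it into a calibration target yaml.
cols, rows, square_px = 9, 7, 100
board = np.zeros((rows * square_px, cols * square_px), dtype=np.uint8)
for r in range(rows):
    for c in range(cols):
        if (r + c) % 2 == 0:
            board[r * square_px:(r + 1) * square_px,
                  c * square_px:(c + 1) * square_px] = 255

# Show the pattern full screen so it fills as much of the monitor as possible.
cv2.namedWindow("pattern", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("pattern", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("pattern", board)
cv2.waitKey(0)
```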


rehderj commented on August 19, 2024

So far, we have not looked into displaying patterns rather than printing them. I can see how displaying has advantages in flatness and potentially in accuracy, while having drawbacks in terms of the maximally achievable pattern size.
So feel free to try displaying the pattern, and please let us know what precision you can achieve.

Other than that, is your issue with correct timestamping in rolling shutter calibration resolved?


JzHuai0108 commented on August 19, 2024

You are right.

Once I finish the work, I will let you know the accuracy of calibration with patterns displayed on a screen.

For the rolling shutter calibration, according to your reply, we can simply use the timestamps of frames in the video as $\bar{t}$. I assume these timestamps should be recorded with respect to a fixed line in the camera sensor.


rehderj commented on August 19, 2024

I think you are fine with using the timestamps that your system provides as long as they do not exhibit excessive jitter.
Frankly, I did not fully grasp your question: As long as you use measurements exclusively from a single sensor, there will be only a single "time frame". Accordingly, with respect to what "base time frame" would you estimate the fixed offset? It should be entirely unobservable from your data whether you recorded a sequence a second ago or yesterday.
Or did I misunderstand your question?


JzHuai0108 commented on August 19, 2024

Thanks for your confirmation.

Yes. From one sensor, there is only one reference "time frame", and it is unobservable with data from a single sensor. So it does not make sense to estimate a fixed offset w.r.t. such an unobservable "time frame". Sorry about the misleading question.

Although there is no wiki page yet for calibrating rolling shutter cameras with the kalibr_calibrate_rs_cameras procedure in Kalibr, I tried it out and some errors persisted. I believe kalibr_calibrate_rs_cameras is still in development, so the following test results may serve the purpose of debugging errors.

Here are the steps of my experiments. First I printed a 6x6 AprilGrid with tag size 0.088 m and tag spacing ratio 0.3, and attached it to a flat board. Then I used an iPhone 6s to take 5 video clips of the AprilGrid. Before capturing these videos, the AF/AE (autofocus/auto exposure) of the rear camera was locked, and the board was leaned against a wall with the x-axis of the AprilGrid pointing to the right and its y-axis pointing up. While capturing each video clip, the phone's camera was rotated (neither too fast nor too slow, by feel) and moved back and forth while trying to keep the entire pattern within the camera view. Each video has a frame rate of 30 Hz (29.97 to be exact) and a length of around 40 seconds. The frame rate was not reduced to 4 Hz as recommended for kalibr_calibrate_cameras, as I believe continuous data is necessary for rolling shutter calibration.

Then I extracted the images of each video and named them with their recorded timestamps in the video. Images from the first 2 seconds were deleted because I suspect that images with timestamps less than 1 second caused a reading error in kalibr_bagcreater. Next, I used kalibr_bagcreater to bundle these images into a bag file.
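For reference, a rough sketch of that extraction and naming step (it assumes OpenCV can decode the clip and that the container timestamps from CAP_PROP_POS_MSEC are the ones to use; the file name, output folder, and the 10-second offset that keeps all timestamps well above 1 second are placeholders):

```python
import os
import cv2

video_path = "clip0.mov"    # placeholder: one of the recorded clips
out_dir = "dataset/cam0"    # kalibr_bagcreater reads images from camN/ folders
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

cap = cv2.VideoCapture(video_path)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Frame timestamp in seconds as reported by the container, plus a constant
    # offset so that no timestamp falls below the ~1 s threshold mentioned above.
    t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0 + 10.0
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Name each image by its timestamp in nanoseconds, which is the naming
    # scheme kalibr_bagcreater expects.
    cv2.imwrite(os.path.join(out_dir, "%d.png" % int(round(t * 1e9))), gray)
cap.release()
```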

Finally, I ran the kalibr_calibrate_rs_cameras routine, and the following error occurred for the images of the first video clip.
$ kalibr_calibrate_rs_cameras --model pinhole-radtan-rs --target rs_kalibr_data/april_6x6.yaml --bag rs_kalibr_data/rsiphone6s.bag --topic /cam0/image_raw --inverse-feature-variance 1 --frame-rate 30
importing libraries
Dataset: rs_kalibr_data/rsiphone6s.bag
Topic: /cam0/image_raw
Number of images: 1101
Extracting calibration target corners
Extracted corners for 488 images (of 1101 images)

Initializing a pose spline with 385 knots (100.000000 knots per second over 38.469767 seconds)
[ERROR] [1473722980.055452]: Exception: list index out of range

For the second video clip, the following error occurred.
$ kalibr_calibrate_rs_cameras --model pinhole-radtan-rs --target rs_kalibr_data/april_6x6.yaml --bag rs_kalibr_data/rsiphone6s.bag --topic /cam1/image_raw --inverse-feature-variance 1 --frame-rate 30
importing libraries
Dataset: rs_kalibr_data/rsiphone6s.bag
Topic: /cam1/image_raw
Number of images: 1076
Extracting calibration target corners
Extracted corners for 605 images (of 1076 images)
[ERROR] [1473722655.986459]: Could not generate initial guess.
[ERROR] [1473722656.805938]: Exception: Nans in curve values

Out of curiosity, I ran kalibr_calibrate_cameras on the images of the first video clip. I know this is absurd because the videos were not captured with the camera kept static and the frame rate was very high (30 Hz). But surprisingly, it gives a fair result, which means the bag file is OK.

Result of kalibr_calibrate_cameras:
cam0 (/cam0/image_raw):
type: <class 'aslam_cv.libaslam_cv_python.DistortedPinholeCameraGeometry'>
distortion: [ 0.08108148 -0.03071386 0.00744229 0.00217337] +- [ 0.00354973 0.01217737 0.00067944 0.00066682]
projection: [ 1993.03642441 1992.20524789 972.09515462 584.96805969] +- [ 6.46044898 6.10336179 3.4411776 4.9814741 ]
reprojection error: [-0.000004, 0.000002] +- [0.945791, 1.032050]

For comparison, the expected values for projection are [1889, 1889, 960, 540].
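Just for context on where such an expected value can come from (the field of view below is a hypothetical nominal figure plugged in for illustration, not a measured spec):

```python
import math

# Hypothetical nominal horizontal field of view; with roughly 54 degrees, a
# 1920-pixel-wide stream gives a pixel focal length close to the 1889 px above.
fov_h_deg = 53.9
width_px = 1920
f_px = width_px / (2.0 * math.tan(math.radians(fov_h_deg) / 2.0))
print(f_px)  # ~1889 px; the principal point at the image center is (960, 540)
```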

I look forward to the wiki page of Kalibr on kalibr_calibrate_rs_cameras.


rehderj commented on August 19, 2024

You are right that some documentation is still missing on rolling shutter calibration and that the addition is fairly new, so it hasn't been tested thoroughly by the community. However, as a single data point, I tested it on one camera with two different line delay settings and it worked perfectly out of the box, so I think Luc did a great job there.
The main aspect that is different in rolling shutter calibration as compared to global shutter calibration is that you try to obtain an estimate of the complete trajectory during calibration rather than just estimating a (temporally not even necessarily ordered) set of discrete camera poses.
This is hard: for the problem to be well posed, you will need to make some assumptions about this motion, such as maximum accelerations, which yield regularization terms. In your dataset, the target was only detected in about half of the frames, leaving observation gaps that have to be bridged entirely by these regularizations (and some clever knot picking). This might fail.
Could you please repeat the calibration, with the highest possible frame rate and either with a checkerboard target or a dot pattern, making sure that you get detections in the majority of frames? Also, I am not sure which settings cause the iPhone to maintain a fixed line delay over a complete dataset, but you will want to find these for the calibration to succeed. Finally, in some cases you can control exposure and line delay separately. In that case, a short exposure is beneficial, since it allows for a better localization of the interest points.
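One quick way to check, before running the optimizer, how severe the detection gaps mentioned above are is to look at the detection coverage and the longest undetected stretch; a rough sketch (the timestamps and frame count below are made-up numbers, not from this dataset):

```python
# Given the timestamps (in seconds) of the frames in which the target was
# detected, report the detection coverage and the longest stretch that the pose
# spline would have to bridge purely through its motion regularization.
def detection_gaps(det_times, total_frames):
    det_times = sorted(det_times)
    gaps = [b - a for a, b in zip(det_times, det_times[1:])]
    coverage = len(det_times) / float(total_frames)
    return coverage, (max(gaps) if gaps else 0.0)

# Made-up example: 4 detections among 13 frames of a 30 Hz stream.
cov, worst_gap = detection_gaps([0.0, 0.033, 0.4, 0.433], total_frames=13)
print(cov, worst_gap)  # ~0.31 coverage, ~0.37 s longest undetected stretch
```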


JzHuai0108 commented on August 19, 2024

I repeated the test with a 5 ms exposure time and 30 FPS. This time the corners could be found in most of the images. Still, occasionally the error "Exception in thread block: Time is out of dual 32-bit range
[ERROR] [1565017119.534108]: Exception: std::exception" cropped up. When no errors appeared, the kalibr_rs module took very long; my feeling is that it took at least one hour for a 90-second image sequence. Another error could show up at the end of the long haul, "CHOLMOD error: all methods failed". So I switched to the branch of pull request #261 and ran kalibr_rs on sessions of data collected by several smartphones. While the estimated intrinsic parameters were often exorbitant, the estimated LineDelays were sometimes not so close to the nominal values, but many times they were.
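For a rough reference on what nominal line delay to compare the estimates against (this assumes the rows are read out over essentially the whole frame period, which is an assumption rather than a sensor spec):

```python
# Upper bound on the line delay of a 1080-row, ~30 fps rolling shutter stream:
# the readout of all rows cannot take longer than one frame period.
fps, rows = 29.97, 1080
line_delay_bound = 1.0 / (fps * rows)  # about 3.1e-5 s, i.e. ~31 microseconds
print(line_delay_bound)
```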

