gril-calib's Introduction

GRIL-Calib

Official implementation of our paper "GRIL-Calib: Targetless Ground Robot IMU-LiDAR Extrinsic Calibration Method using Ground Plane Motion Constraints".

About GRIL-Calib

  • GRIL-Calib is a LiDAR-IMU extrinsic calibration method for ground robots.
  • Using only planar motion, it can estimate the full 6-DOF calibration parameters.

Prerequisites

Set up your environment easily with Docker! 🐳

Requires Docker and the NVIDIA Container Toolkit to be installed.
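As a quick sanity check that the toolkit is working (a minimal sketch, assuming a Docker version that supports the --gpus flag; the toolkit injects nvidia-smi into the container at runtime):

# Should print your host's GPU table if the NVIDIA Container Toolkit is set up correctly.
docker run --rm --gpus all ubuntu nvidia-smi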

1. Enter the /docker folder and build a Docker image.

git clone https://github.com/Taeyoung96/GRIL-Calib.git
cd GRIL-Calib/docker
docker build -t gril-calib .

When the build finishes, run docker images and you should see output similar to the following.

REPOSITORY                   TAG                   IMAGE ID         CREATED          SIZE
gril-calib                   latest                9f90339349a0     5 months ago     3.78GB

2. Create the Docker container (from the same /docker folder)

In /docker,

sudo chmod -R 777 container_run.sh
./container_run.sh <container_name> <image_name:tag>

⚠️ Replace <container_name> and <image_name:tag> to suit your environment. For example:

./container_run.sh gril-calib-container gril-calib:latest 

If you have successfully created the Docker container, the terminal output will look similar to the following.

================Gril-Calib Docker Env Ready================
root@taeyoung-cilab:/root/catkin_ws#

3. Build and run GRIL-Calib

Inside the Docker container, build and run the package.

catkin_make                # build the workspace (run from /root/catkin_ws)
source devel/setup.bash    # overlay the workspace so roslaunch can find gril_calib

Run with a public dataset

Launch files are provided for the M2DGR, HILTI, and S3E datasets used in the paper's experiments:

  • For M2DGR,
roslaunch gril_calib m2dgr_xxxx.launch
  • For HILTI,
roslaunch gril_calib hilti_xxxx.launch
  • For S3E,
roslaunch gril_calib s3e_xxxx.launch

After running the launch file, simply play the bag file for each sequence.
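For example (a sketch assuming an M2DGR sequence named street_01.bag; substitute the bag for your own sequence):

# In a second terminal, with the launch file already running:
rosbag play street_01.bag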

Run with your custom data

⚠️ This version only supports spinning LiDARs (Velodyne, Ouster), not solid-state LiDARs.

The reason is that the LiDAR ground segmentation algorithm has only been tested on spinning LiDARs.
If ground segmentation could be achieved for a solid-state LiDAR such as the Livox Avia, the method should in principle work there as well.

  • Make sure to acquire your data in an area with flat ground.
  • It helps to collect data while the ground robot drives in a figure-8 pattern.
  • Make sure the unit of your input angular velocity is rad/s (multiply by π/180 first if your IMU reports deg/s). A quick check is sketched below.
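To sanity-check the units (a minimal sketch; replace /your_imu_topic with your actual IMU topic), echo the angular velocity and confirm the magnitudes are plausible in rad/s — about 6.28 corresponds to one full revolution per second:

# Values far above ~10 during gentle turns usually indicate deg/s, not rad/s.
rostopic echo /your_imu_topic/angular_velocity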

Important parameters

Similar to LI-Init, edit config/xxx.yaml to set the parameters below:

  • lid_topic: Topic name of the LiDAR point cloud.
  • imu_topic: Topic name of the IMU measurements.
  • imu_sensor_height: Height from the ground to the IMU sensor (meters).
  • data_accum_length: Threshold used to decide whether enough data has been accumulated for calibration.
  • x_accumulate: Determines how much rotation about the x-axis must accumulate (assuming the x-axis points forward).
  • y_accumulate: Determines how much rotation about the y-axis must accumulate (assuming the y-axis points left).
  • z_accumulate: Determines how much rotation about the z-axis must accumulate (assuming the z-axis points up).
  • gyro_factor, acc_factor, ground_factor: Weight for each residual.
  • set_boundary: Whether to bound the translation vector around its initial value during the nonlinear optimization.
  • bound_th: Threshold for that bound (meters). ⭐️ See the ceres-solver documentation for more information.
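For reference, here is a minimal sketch of how these fields might look in a config file. The field names come from the list above; the values and the flat (un-nested) layout are illustrative assumptions, so check the shipped config/xxx.yaml files for the exact structure:

# Illustrative values only -- adapt to your sensors and robot.
lid_topic: "/velodyne_points"    # LiDAR point cloud topic
imu_topic: "/imu/data"           # IMU measurement topic
imu_sensor_height: 0.22          # meters, ground to IMU
data_accum_length: 300           # data-sufficiency threshold
x_accumulate: 0.999              # rotation accumulation about x (front)
y_accumulate: 0.999              # rotation accumulation about y (left)
z_accumulate: 0.999              # rotation accumulation about z (up)
gyro_factor: 1.0                 # gyroscope residual weight
acc_factor: 1.0                  # accelerometer residual weight
ground_factor: 1.0               # ground-constraint residual weight
set_boundary: true               # bound translation around its initial value
bound_th: 0.5                    # meters, bound threshold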

Acknowledgments

Thanks to hku-mars/LiDAR_IMU_Init for sharing their awesome work!
Thanks also to url-kaist/patchwork-plusplus-ros for sharing their LiDAR ground segmentation algorithm.

Citation

If you find our paper useful in your research, please cite us using the following entry:

@ARTICLE{10506583,
  author={Kim, TaeYoung and Pak, Gyuhyeon and Kim, Euntai},
  journal={IEEE Robotics and Automation Letters}, 
  title={GRIL-Calib: Targetless Ground Robot IMU-LiDAR Extrinsic Calibration Method Using Ground Plane Motion Constraints}, 
  year={2024},
  volume={9},
  number={6},
  pages={5409-5416},
  keywords={Calibration;Laser radar;Robot sensing systems;Robots;Optimization;Odometry;Vectors;Calibration and identification;sensor fusion},
  doi={10.1109/LRA.2024.3392081}}


gril-calib's Issues

how to collect dataset better

What a fantastic project! We started testing as soon as the code was released.

Currently, I'm trying to use the Livox Mid-360 for testing. We haven't collected datasets specifically for calibration yet, but we're working with existing datasets. Convergence tends to be good in scenarios with more vigorous movement, but it's challenging at slower speeds (less frequent turns). Do you have any suggestions for the vehicle's speed and the environment during data collection? Thanks :)

Data Collection Suggestions

Hi

Would it be possible to provide some suggestions for data collection in the readme? E.g. "Rotate in all axes; make figure 8 trajectories" etc.

x,y,z_accumulate and Impact on accuracy

Hello,
Thanks for posting this nice method!
I have two questions:

  1. In the config file, we should give the x,y,z_accumulate:

x_accumulate: Parameter that determines how much the x-axis rotates (Assuming the x-axis is front)
y_accumulate: Parameter that determines how much the y-axis rotates (Assuming the y-axis is left)
z_accumulate: Parameter that determines how much the z-axis rotates (Assuming the z-axis is up)

So, if my y-axis is positive to the right, should I fill in 0.999?

2. Are there any time limits or path requirements (like how long the figure-8 path should be) for data collection? I used the same robot at the same speed, but the output parameters are very different. See below; data1 and data2 were collected in the same place:

data1:
LiDAR-IMU calibration result:
Rotation LiDAR to IMU (degree) = -3.664754 0.884811 -7.481670
Translation LiDAR to IMU (meter) = -0.445991 -0.568093 -0.098269
Time Lag IMU to LiDAR (second) = -2.458705
Bias of Gyroscope (rad/s) = -0.010000 0.010000 0.010000
Bias of Accelerometer (meters/s^2) = -0.008860 0.010584 -0.010463

Homogeneous Transformation Matrix from LiDAR frame L to IMU frame I:
0.991370 0.128955 0.023600 -0.445991
-0.130184 0.989589 0.061363 -0.568093
-0.015441 -0.063906 0.997836 -0.098269
0.000000 0.000000 0.000000 1.000000

data2:
LiDAR-IMU calibration result:
Rotation LiDAR to IMU (degree) = -0.514068 -0.272551 10.003449
Translation LiDAR to IMU (meter) = 0.279931 -1.056407 -0.092049
Time Lag IMU to LiDAR (second) = 0.659137
Bias of Gyroscope (rad/s) = -0.010000 0.010000 0.002501
Bias of Accelerometer (meters/s^2) = -0.008049 -0.011665 -0.009957

Homogeneous Transformation Matrix from LiDAR frame L to IMU frame I:
0.984788 -0.173646 -0.006242 0.279931
0.173693 0.984767 0.008009 -1.056407
0.004757 -0.008971 0.999948 -0.092049
0.000000 0.000000 0.000000 1.000000

Ground Constraints

Hello,
thank you for open-sourcing this work.
I have a question: does the ground constraint in this code really matter in the IESKF update? Do thousands of point-to-plane residual observations combined with only three ground constraints really have a noticeably different impact on the state updates?

The estimated height fluctuates frequently

Hi @Taeyoung96 ,

Thanks for your great work!
When I test a dataset we collected, I find that the estimated height fluctuates frequently and is quite different from the value we measured manually.

I tried visualizing "/patchworkpp/ground" and found that it contains not only the ground points but also points on the ceiling, as shown in the images below:

[screenshots: /patchworkpp/ground output including ceiling points]

For the patchwork++ parameters in the config file, I only changed "sensor_height" and did not make any other changes. Do I need to modify anything else? The height of the IMU is approximately 0.22 m and the height of the LiDAR is 0.075 m. Looking forward to your reply. Thanks.

about source codes

Hey TaeYoung,
This is wonderful work. I would like to test/verify the method with your implementation, so will you share the source code?

thanks,

-Deliang
