
vicon2gt

This utility generates groundtruth trajectories from a motion capture system (e.g., Vicon or OptiTrack) for use in evaluating visual-inertial estimation systems. Specifically, we calculate the full 15-dof inertial IMU state at the camera frequency and generate a groundtruth trajectory similar to those provided by the EurocMav datasets. Please take a look at the REPORT for specific details and a more formal description.
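The 15-dof state comprises orientation, gyroscope bias, velocity, accelerometer bias, and position. A minimal sketch of this layout (names and ordering assumed from the example state printout shown later, not taken from the codebase):

```python
import numpy as np

# Hypothetical layout of one estimated state; the quaternion stores 4
# values but only carries 3 dof due to its unit-norm constraint.
state = {
    "q":  np.array([0.0, 0.0, 0.0, 1.0]),  # orientation quaternion (3 dof)
    "bg": np.zeros(3),                      # gyroscope bias
    "v":  np.zeros(3),                      # velocity
    "ba": np.zeros(3),                      # accelerometer bias
    "p":  np.zeros(3),                      # position
}
dof = (4 - 1) + 3 + 3 + 3 + 3  # = 15
```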

You will need a bag dataset that contains the IMU, camera, and motion capture measurements as either geometry_msgs::TransformStamped, geometry_msgs::PoseStamped, or nav_msgs::Odometry. If you use an odometry topic, the provided covariance will be used; otherwise the covariance specified in the launch file will be used. To run, please take a look at the example launch files and try them out before testing on your own dataset.

Example Outputs

(image: optimized trajectory)

state_0: 
m_time:[1413394882.43529]'
q:[0.0251614, -0.79553, 0.0362641, 0.604304]'
bg:[-0.00190837, 0.0243624, 0.0807357]'
v:[-0.00342677, 0.00767059, 0.0012364]'
ba:[-0.012316, 0.0955729, 0.0272365]'
p:[-1.05904, 0.429683, 1.33247]'

state_N: 
m_time:[1413394997.13767]'
q:[-0.493136, -0.636743, -0.359694, 0.471165]'
bg:[-0.00152427, 0.0255527, 0.0799344]'
v:[-0.00181216, 0.00289851, -0.00109534]'
ba:[-0.00140464, 0.0198116, 0.0590847]'
p:[-3.02558, -0.526505, 0.958149]'

R_BtoI: 
 0.301581  0.017225  0.953285
0.0239218 -0.999659  0.010495
  0.95314 0.0196392 -0.301891

p_BinI: 
 0.0789319
-0.0253601
 -0.118787

R_GtoV: 
    0.999968            0   0.00803607
-7.00472e-05     0.999962   0.00871631
 -0.00803577  -0.00871659      0.99993

t_off_vicon_to_imu: 
0.197127
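As a quick sanity check on outputs like the above, the recovered extrinsic rotation should be orthonormal with unit determinant. A small NumPy check, using the R_BtoI values printed above:

```python
import numpy as np

# R_BtoI as printed in the example output above
R = np.array([
    [0.301581,   0.017225,   0.953285],
    [0.0239218, -0.999659,   0.010495],
    [0.95314,    0.0196392, -0.301891],
])

# A valid rotation satisfies R R^T = I and det(R) = +1
orthonormal_err = np.max(np.abs(R @ R.T - np.eye(3)))
det = np.linalg.det(R)
print(orthonormal_err, det)
```

With values printed to six significant figures, the residuals are on the order of 1e-5.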

Frequently Asked Questions

  1. Help, it doesn't converge to a good trajectory! -- This can be caused by many things and is hard to debug. The first thing we recommend is to look at the optimized trajectory in RVIZ and check whether the alignment looks ok. If there are issues with noisy poses you will see them in RVIZ; in that case it is likely you will not be able to optimize the trajectory. Another source might be poor noise values, so try adjusting both the IMU noises and the vicon pose noises. If the dataset is degenerate, you might want to try fixing the marker-to-IMU transform (see the launch files), since with insufficient motion this calibration can bias the trajectory results.

  2. Do I need an initial guess of the marker body to IMU transform? -- We found that on good datasets you can simply set this to the identity transformation. If you know the values, you can also try fixing them instead of estimating them online. Additionally, if there is enough rotation in the dataset, the time offset usually converges robustly.

  3. The calibration seems to not converge to the same thing on different collected datasets, why? -- We don't expect the calibration to converge to the same thing on each dataset, as it highly depends on the motion of the trajectory. If this worries you, you can try fixing the calibration and not optimizing it, but in general, even if the calibration differs, the trajectory itself should still be of high quality.

  4. Do I need a camera topic? -- No, but you need some topic to specify the timestamps at which you want the optimized states. We recommend specifying anything at a rate lower than the IMU frequency; for example, you could specify the vicon topic or another topic.

  5. Explain the timestamps in the groundtruth file -- We have two time systems in the project: vicon and inertial. The states we estimate in the optimization problem are in the IMU clock frame, and the CSV file we save has timestamps in the IMU clock frame. We use an arbitrary topic's timestamps to define which timestamps we export, but all these times are still in the IMU clock frame. E.g., if we use a camera topic, the CSV will have the poses at the IMU clock time of the timestamps in that topic (i.e., if you wish to get the pose at the camera timestamp, there is an additional imu-to-camera time offset you need to worry about, which is not taken into account here).

  6. What frame of reference is the groundtruth file in? -- The saved trajectory is in a gravity-aligned frame of reference with its origin located at the vicon frame origin and yaw set to zero. We estimate the roll-pitch rotation from the vicon frame to the gravity-aligned frame and save all groundtruth orientations rotated into this gravity frame. Thus one should use either a position + yaw or an SE(3) alignment method.
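For the SE(3) case, one common choice is a Kabsch/Umeyama-style least-squares alignment. A minimal sketch (not part of this repo; the evaluation tooling you use may provide its own):

```python
import numpy as np

def align_se3(est, gt):
    """Find R, t minimizing ||R @ est + t - gt||^2 (Kabsch, no scale).

    est, gt: 3xN arrays of corresponding positions.
    """
    mu_e = est.mean(axis=1, keepdims=True)
    mu_g = gt.mean(axis=1, keepdims=True)
    # Cross-covariance of the centered point sets
    H = (gt - mu_g) @ (est - mu_e).T
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so det(R) = +1
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t
```

A position + yaw alignment restricts R to a rotation about the gravity axis; the same centroid/least-squares idea applies with the rotation constrained to yaw only.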

Building GTSAM and Dependencies

GTSAM is the only package you need to build; the others (Boost, Eigen3) typically come with a Linux-based OS. You can optionally install Intel TBB, which enables multithreaded GTSAM performance (link). You don't need this, but it allows for faster optimization.

wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
rm GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo sh -c 'echo deb https://apt.repos.intel.com/tbb all main > /etc/apt/sources.list.d/intel-tbb.list'
sudo apt-get update
sudo apt-get install intel-tbb-2020.3-912

Then we can build GTSAM as normal and install it globally on our system. Note that we use the system Eigen since we want to link with packages which also use that Eigen version.

sudo apt install libboost-all-dev libeigen3-dev libmetis-dev
git clone https://github.com/borglab/gtsam
cd gtsam
git checkout 4.2a5 # this repo supports GTSAM 4.2a5
mkdir build/
cd build/
cmake -DCMAKE_BUILD_TYPE=Release -DGTSAM_USE_SYSTEM_EIGEN=ON ..
make -j6
sudo make -j6 install
echo 'export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH' >> ~/.bashrc
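Once GTSAM is installed, a downstream CMake project can locate and link it via its exported package config. A sketch, assuming the standard `gtsam` target that GTSAM's install exports (`my_node` and the source path are placeholders):

```cmake
find_package(GTSAM REQUIRED)

add_executable(my_node src/main.cpp)
target_link_libraries(my_node gtsam)
```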

Credit / Licensing

This code was written by the Robot Perception and Navigation Group (RPNG) at the University of Delaware. If you have any issues with the code, please open an issue on our github page with relevant implementation details and references. For researchers who have leveraged or compared to this work, please cite the following:

@TechReport{Geneva2020TRVICON2GT,
  Title = {vicon2gt: Derivations and Analysis},
  Author = {Patrick Geneva and Guoquan Huang},
  Number = {RPNG-2020-VICON2GT},
  Institution = {University of Delaware},
  Note = {Available: \url{http://udel.edu/~ghuang/papers/tr_vicon2gt.pdf}},
  Year = {2020},
}

The codebase is licensed under the GNU General Public License v3 (GPL-3).


Contributors

goldbattle, jessebloecker, woosiklee2510


vicon2gt's Issues

Did not get the same EuRoC dataset trajectory error

Hi there,

I am trying to reproduce the EuRoC dataset trajectory error from your report before using the tool on my own dataset. However, I am not getting values close to the results in the report. Below is how I used this tool:

  1. In the EuRoC mav folder, converted the imu0 and vicon0 folders into a single rosbag
  2. Changed the exp_eth.launch file (I only changed "dataset", "folder", and "state_freq"=200Hz) and ran roslaunch exp_eth.launch
  3. SE(3)-aligned the trajectory from exp_eth.launch against the trajectory in the EuRoC state_groundtruth_estimate folder, but the trajectory error is much bigger than the values in the report (about 10x bigger).

Could you please let me know what could be going wrong in this process?

Unable to run properly on TUM-VI dataset

Hello,

I've encountered an issue while using exp_tum.launch to run the TUM-VI dataset, e.g. the room scene. The nonlinear optimization does not seem to run correctly. (When I use exp_eth.launch on the EuRoC dataset, it runs without any problems.) The relevant log output is as follows:

[ INFO] [1705805883.917849622]: [VICON-GRAPH]: begin optimization
Warning:  Levenberg-Marquardt giving up because cannot decrease error with maximum lambda
converged
errorThreshold: 2.70976e+10 <? 0
absoluteDecrease: 0 <? 1e-30
relativeDecrease: 0 <? 1e-30
iterations: 0 >? 30
[ INFO] [1705805887.771673091]: [VICON-GRAPH]: done optimization (0 iterations)!
[ INFO] [1705805887.779034128]: [TIME]: 1.0088 to build
[ INFO] [1705805887.779087709]: [TIME]: 4.0854 to optimize
[ INFO] [1705805887.779097869]: [TIME]: 5.0941 total (loop 0)
[ INFO] [1705805887.779122074]: Publishing: /vicon2gt/optimized
[ INFO] [1705805887.779137393]: Publishing: /vicon2gt/vicon

The L-M algorithm ends after 0 iterations, and the generated trajectory is incorrect.

I would greatly appreciate it if you could provide some insight into why this might be happening. Could it be an error in my usage of the tool?

GTSAM version for compiling

Which GTSAM version should be used to compile this repo? I tried the latest version of GTSAM and 4.1.0, but neither works; I failed to compile the repo with either. Eventually, I found that it compiles with GTSAM 4.2a5. It would be appreciated if you could specify the GTSAM version that is compatible with this repo. Thanks!
