dynamic-vins's People

Contributors

jianhengliu


dynamic-vins's Issues

Some questions about the experiment

Excuse me, my questions are as follows:

1. I ran the office1-1 experiment and evaluated it with the evo tool using the -r full parameter; the ATE RMSE I got was 2.3, which is quite different from the results in the paper. Is the ATE RMSE in Figure 6 an ATE that accounts for both rotation and translation errors, i.e. is evaluating with evo's -r full parameter the right way to reproduce it? And how can I improve things so that my results come closer to the paper's?

2. Regarding the value in the upper-left corner of each plot in Figure 6: is there a tool that computes the average correct rate for every scene?
Thank you!
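For reference, a minimal sketch (my own, not the authors' script) of what evo computes with -r full versus -r trans_part, using evo's Python API; -r full mixes the rotation and translation parts into one unitless error, while translation-only APE is what ATE RMSE usually denotes. File names are placeholders and API details may vary slightly across evo versions:

from evo.core import metrics, sync
from evo.tools import file_interface

ref = file_interface.read_tum_trajectory_file("groundtruth.txt")  # placeholder paths
est = file_interface.read_tum_trajectory_file("estimate.txt")
ref, est = sync.associate_trajectories(ref, est)
est.align(ref)  # SE(3) alignment, like evo_ape -a

for rel in (metrics.PoseRelation.full_transformation,  # evo_ape ... -r full
            metrics.PoseRelation.translation_part):    # evo_ape ... -r trans_part
    ape = metrics.APE(rel)
    ape.process_data((ref, est))
    print(rel.value, "APE RMSE:", ape.get_statistic(metrics.StatisticsType.rmse))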

vins_result_no_loop.csv is empty

Hello author, when I run the tum-rgbd-fr3-walking-halfsphere dataset, the vins_result_no_loop.csv file written to the output folder after the run finishes is empty. Could you point out where the problem might be? I have already modified the TUM yaml file:
(screenshots of the modified yaml file)

What if a reset happens

Related to #6 .

May I ask how the system is managed when a failure is detected, and what would be the best way to evaluate it?
In my experiments, each failure brings the robot back to (0, 0, 0) and reinitializes everything. However, at that point it is difficult to evaluate the trajectory.

Freezing when playing the cafe1-1.bag-imu.bag bag

Following the README, I extracted cafe1-1.bag and ran merge_imu_topics.py to get cafe1-1.bag-imu.bag. Playing the bag on its own does not freeze, but after starting roslaunch vins_estimator openloris_vio_pytorch.launch and then playing the bag, it freezes immediately.

Custom bag file

Hi there, we're running a custom bag file.
My guess is that we need to change the focal length in the parameters.h file.
Still, we are not able to run with the IMU, since we continuously get ROS_INFO("Not enough features or parallax; Move device around");

Any tip on how we can solve it, apart from considering another bag? Is there any parameter we should look into, given that we run the IMU at 240 Hz and have a different camera?

Thanks for your help

No semantic_mask when running the OpenLORIS dataset

The commands used are:
roslaunch vins_estimator openloris_vio_pytorch.launch
roslaunch vins_estimator vins_rviz.launch
rosbag play YOUR_PATH_TO_DATASET/market1-3.bag-imu.bag
In Rviz I added an Image display and selected the topic /vins_estimator/semantic_mask, but no image is shown.
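As a quick diagnostic (my own sketch, not part of the repo), you can check whether the topic is being published at all before suspecting Rviz; the topic name is taken from the commands above:

import rospy
from sensor_msgs.msg import Image

def cb(msg):
    rospy.loginfo("semantic_mask: %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node("semantic_mask_probe")
rospy.Subscriber("/vins_estimator/semantic_mask", Image, cb)
rospy.spin()  # if nothing is ever logged, the mask topic is simply not published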

The experimental results are quite different from the paper values

Hello author, I ran into some problems when using openloris-scene-tools to evaluate the cafe scene datasets. On cafe1-1 the measured result is similar to the paper's, but on cafe1-2 the tracking correct rate is consistently 0.574, far from the 0.96 reported in the paper.
The result figures are below; each individual sequence was evaluated with evaluate.py.
(evaluation result screenshots)

Error when running on a Jetson Xavier NX: Failed to load nodelet '/EstimatorNodelet' of type 'vins_estimator/EstimatorNodelet' to manager 'nodelet_manager'

The error messages are as follows:
[TensorRT] WARNING: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
bingding: data (3, 640, 640)
bingding: prob (6001, 1, 1)
batch size is 1
[FATAL] [1685328734.855312649]: Failed to load nodelet '/EstimatorNodelet' of type 'vins_estimator/EstimatorNodelet' to manager 'nodelet_manager'
[nodelet_manager-3] process has died [pid 7843, exit code -11, cmd /opt/ros/melodic/lib/nodelet/nodelet manager __name:=nodelet_manager __log:=/home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/nodelet_manager-3.log].
log file: /home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/nodelet_manager-3*.log
[EstimatorNodelet-4] process has died [pid 7844, exit code 255, cmd /opt/ros/melodic/lib/nodelet/nodelet load vins_estimator/EstimatorNodelet nodelet_manager __name:=EstimatorNodelet __log:=/home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/EstimatorNodelet-4.log].
log file: /home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/EstimatorNodelet-4*.log
Environment: ROS Melodic + OpenCV 3.4.12 + Ceres 1.13.0 + Eigen 3.3.4

No objects detected at runtime

Service call failed: service [/yolo_service] responded with an error: error processing request: 'Upsample' object has no attribute 'recompute_scale_factor'
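For context, this AttributeError is a known incompatibility between YOLOv5 models pickled with an older torch and torch >= 1.11, whose nn.Upsample.forward reads self.recompute_scale_factor. A commonly used workaround is to patch the missing attribute after loading the network; model below is a hypothetical name for whatever variable yolo_ros loads the YOLOv5 network into:

import torch

# `model` is the loaded YOLOv5 network (hypothetical name).
for m in model.modules():
    if isinstance(m, torch.nn.Upsample) and not hasattr(m, "recompute_scale_factor"):
        m.recompute_scale_factor = None  # attribute expected by torch >= 1.11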

Failed to load nodelet


Hello, I ran into the above problem at runtime; running yolo_ros on its own works fine. I have already changed the Ceres and Sophus library versions as described in the installation instructions, and catkin_make completes without errors. What could the problem be?

Yolo v3 or v5?

Hi,
in the paper you mention it is YOLOv3, but in the credits of the README you say YOLOv5 (and yolo_ros uses that, afaik), while the weights are from v3. Which one is it? I am a bit confused.

The format of the saved trajectory

Hello, I have run Dynamic-VINS on my platform successfully. However, I met a problem when evaluating it.
I do not know how to save the estimated trajectory in the TUM format. The saved *.csv file's format is different from the groundtruth.txt of OpenLORIS.
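In case it helps, a minimal conversion sketch (mine, not an official tool). It assumes the VINS-style CSV layout timestamp_ns, x, y, z, qw, qx, qy, qz, ... for the saved file; verify the column order against your own output first. TUM format is "timestamp tx ty tz qx qy qz qw" with timestamps in seconds:

import csv

with open("vins_result_no_loop.csv") as f_in, open("estimate_tum.txt", "w") as f_out:
    for row in csv.reader(f_in):
        if len(row) < 8:
            continue  # skip empty or malformed lines
        t = float(row[0]) * 1e-9      # ns -> s (assumed layout)
        x, y, z = row[1:4]
        qw, qx, qy, qz = row[4:8]     # qw stored first (assumed layout)
        f_out.write("%.9f %s %s %s %s %s %s %s\n" % (t, x, y, z, qx, qy, qz, qw))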

Run error

When I run roslaunch vins_estimator openloris_vio_pytorch.launch, I encounter the following error:
(error screenshot)
Can you tell me how I should handle this?

Failed to load nodelet [/EstimatorNodelet] of type [vins_estimator/EstimatorNodelet] even after refreshing the cache

Hello author, when I run the system the following problem appears:
It says libestimator_nodelet.so is missing, but that file already exists under my /devel/lib/ directory. My runtime environment is Ubuntu 20.04 with ROS Noetic.

[ INFO] [1679707768.981822341]: waitForService: Service [/yolo_service] has not been advertised, waiting...
[ INFO] [1679707768.984874317]: Loading nodelet /EstimatorNodelet of type vins_estimator/EstimatorNodelet to manager nodelet_manager_pc with the following remappings:
[ INFO] [1679707768.985121969]: /camera/color/image_raw -> /d400/color/image_raw
[ INFO] [1679707768.985669141]: waitForService: Service [/nodelet_manager_pc/load_nodelet] has not been advertised, waiting...
[ INFO] [1679707768.989190628]: Initializing nodelet with 20 worker threads.
[ INFO] [1679707769.006713921]: waitForService: Service [/nodelet_manager_pc/load_nodelet] is now available.
[ERROR] [1679707769.030456673]: Failed to load nodelet [/EstimatorNodelet] of type [vins_estimator/EstimatorNodelet] even after refreshing the cache: Failed to load library /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so: undefined symbol: _Z11DEPTH_TOPICB5cxx11)
[ERROR] [1679707769.036358328]: The error before refreshing the cache was: Failed to load library /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so: undefined symbol: _Z11DEPTH_TOPICB5cxx11)
[FATAL] [1679707769.056044454]: Failed to load nodelet '/EstimatorNodelet' of type 'vins_estimator/EstimatorNodelet' to manager 'nodelet_manager_pc'
[EstimatorNodelet-5] process has died [pid 19624, exit code 255, cmd /opt/ros/noetic/lib/nodelet/nodelet load vins_estimator/EstimatorNodelet nodelet_manager_pc /camera/color/image_raw:=/d400/color/image_raw __name:=EstimatorNodelet __log:=/home/zhu/.ros/log/7c3e0fb6-caac-11ed-b95c-c1c703164ee1/EstimatorNodelet-5.log].
log file: /home/zhu/.ros/log/7c3e0fb6-caac-11ed-b95c-c1c703164ee1/EstimatorNodelet-5*.log
YOLOv5 🚀 e310fa9 torch 1.10.2+cu113 CUDA:0 (NVIDIA GeForce RTX 3060, 12036MiB)

yolo_ros Problem

I followed the steps of the NVIDIA devices installation, but I am facing this problem with yolo_ros. Does anyone know how to solve it?

The CUDA and Torch versions are as follows.

Thank you!!!

(screenshots: error message and CUDA/Torch versions)

Rviz display error

Hello author, following the documentation I configured and ran:
roslaunch vins_estimator openloris_vio_pytorch.launch
roslaunch vins_estimator vins_rviz.launch
rosbag play /home/yugui/code/catkin_ws/datas/cafe1-2.bag-imu.bag
After that, the Odometry display in rviz shows Status: Error. What is the cause, and how can I resolve it?


Ubuntu 18.04 environment configuration

My setup is ROS Melodic + OpenCV 3.2 + CUDA 11.0; after repeated attempts I still cannot see any object-detection results. Which versions of the various packages did you use when running under Ubuntu 18.04?

Failed to load nodelet '/EstimatorNodelet' of type 'vins_estimator/EstimatorNodelet'

Hello, I also ran into the 'Failed to load nodelet /EstimatorNodelet of type vins_estimator/EstimatorNodelet to manager nodelet_manager_pc' problem. However, I am deploying on an NVIDIA NX: flashing it with SDK Manager ships OpenCV 4, and the OpenCV bundled with ros-melodic was never installed, so I cannot point the build at the ROS-bundled OpenCV in the usual way. My questions are: what exactly causes the nodelet to fail to load? And would it work to download and build OpenCV 3.2.0 myself (the version bundled with ros-melodic) and then modify cv_bridge accordingly? Thanks.

Initial orientation of the robot

Hi there.
I'm facing some issues replaying my bags.
Essentially, after some time the system starts drifting when I use VIO.
With VO it does not happen.
My guess is that it's mainly related to the fact that the robot is not horizontal and static at the beginning of the experiment, or maybe due to spikes that we have in the IMU. However, those spikes are coherent with the odometry (it's simulated data).
Is there a way to tell your system "this is my initial position/orientation" so that gravity and the IMU locations are initialized correctly?

The main function of source code and the figure of OpenLORIS

Great work!
I met some problems when reading the paper and running the code.

  1. How did you draw the OpenLORIS picture below? We would also like to draw a similar figure to evaluate the robustness of our algorithm.
    (OpenLORIS robustness figure)

  2. Can you tell me where the main function of the source code is? I cannot find the main() of the estimator. Thank you very much.

The predictPtsInNextFrame function

Hello author, I have a question about the predictPtsInNextFrame function:
void FeatureTracker::predictPtsInNextFrame(const Matrix3d &_relative_R)
{
    predict_pts.resize(cur_pts.size());
    for (unsigned int i = 0; i < cur_pts.size(); ++i)
    {
        Eigen::Vector3d tmp_P;
        // back-project the current pixel to a normalized (unit-depth) ray
        m_camera->liftProjective(Eigen::Vector2d(cur_pts[i].x, cur_pts[i].y), tmp_P);
        // rotate the ray by the predicted inter-frame rotation
        Eigen::Vector3d predict_P = _relative_R * tmp_P;
        Eigen::Vector2d tmp_p;
        // project the rotated ray back to pixel coordinates
        m_camera->spaceToPlane(predict_P, tmp_p);
        predict_pts[i].x = tmp_p.x();
        predict_pts[i].y = tmp_p.y();
    }
}
Is _relative_R here R21? You multiply the previous frame's normalized coordinates by this rotation to get the predicted coordinates, and then convert those to pixel coordinates. I would like to ask what _relative_R * tmp_P means: in the transformation P2 = R21 * P1 + t21, P1 is not a normalized coordinate; depth information is required.
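One possible reading (my interpretation, not an answer from the authors): the prediction deliberately drops t21 between consecutive frames, and under a pure rotation the unknown depth cancels because projection is scale-invariant. Writing $\bar{x}_1$ for the normalized coordinate, $\lambda$ for its depth, and $\pi(\cdot)$ for the projection,

$$\pi\big(R_{21}(\lambda \bar{x}_1)\big) = \pi\big(\lambda\, R_{21} \bar{x}_1\big) = \pi\big(R_{21} \bar{x}_1\big), \qquad \lambda > 0,$$

so spaceToPlane(_relative_R * tmp_P) is exact for a pure rotation and only an approximation whenever the inter-frame translation (and hence depth) matters.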

[bug] exception row->0 columns->

hi there

so sometimes (in VO mode) this line throws an exception.

I've checked VINS-Fusion: they check whether m == 0 and set a flag (valid) to false, which is then used in the estimator.cpp file (here, here and here).

Did this ever happen to you?

No Image received

Hello, when I run the roslaunch commands as documented, there is no feature_image or semantic_image. Where might the problem be?

Experimental Results on market sequence of OpenLORIS dataset

I have run Dynamic-VINS on my own platform and evaluated it using openloris-scene-tools.
I compared my results with the results in the paper and found that the result on market1-3 is not the same as in the paper, as shown below:
(figure 1: our evaluation; figure 2: results from the paper)
The last line in the first figure is our result; the second figure shows the results from the original paper. As you can see, our result and VINS (both pinhole and fisheye) are inconsistent at the same position on market1-3, whereas the results in your paper are consistent. Can I achieve the effect in the paper by adjusting the parameters in the *.yaml files?
The parameters we currently use that may be related to this are configured as follows:

#RGBD camera Ideal Range
depth_min_dist: 0.3
depth_max_dist: 3

frontend_freq: 30 # It should be raised in VO mode (without IMU).
keyframe_parallax: 10.0 # keyframe selection threshold (pixel); if the system fails frequently, please try to reduce the "keyframe_parallax"
num_grid_rows: 7
num_grid_cols: 8

#unsynchronization parameters
estimate_td: 1    ##########                  # online estimate time offset between camera and imu
td: 0.000                           # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
#feature tracker parameters
max_cnt: 130           # max feature number in feature tracking. It is suggested to be raised in VO mode.
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 1             # if the image is too dark or too light, turn on equalize to find enough features
fisheye: 0              # if using a fisheye camera, turn it on. A circle mask will be loaded to remove noisy edge points

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time

#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 0.1          # accelerometer measurement noise standard deviation.
gyr_n: 0.01         # gyroscope measurement noise standard deviation.
acc_w: 0.0002         # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 2.0e-5       # gyroscope bias random walk noise standard deviation.     #4.0e-5

g_norm: 9.805       # gravity magnitude

#rolling shutter parameters
# rolling_shutter: 0                      # 0: global shutter camera, 1: rolling shutter camera
# rolling_shutter_tr: 0               # unit: s. rolling shutter read out time per frame (from data sheet)
rolling_shutter: 1                      # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0.033               # unit: s. rolling shutter read out time per frame (from data sheet)

The weighted predicted velocity

Hello, I have a new question about the weighted predicted velocity in Eq. (4) of your paper.
(screenshot of Eq. (4))
As we can see, the weighted predicted velocity of frame j+1 is updated from the weighted predicted velocity and the pixel velocity of frame j. In the source code, however, I found that the weighted predicted velocity of frame j is the same as the pixel velocity.

if (temp_object_id > 0)
{
    dynamic_objects[temp_object_id].x_vel =
        x_center - dynamic_objects[temp_object_id].x_center;
    dynamic_objects[temp_object_id].y_vel =
        y_center - dynamic_objects[temp_object_id].y_center;

    dynamic_objects[temp_object_id].x_weight_vel = dynamic_objects[temp_object_id].x_vel;
    dynamic_objects[temp_object_id].y_weight_vel = dynamic_objects[temp_object_id].y_vel;
}
else
{
    temp_object_id = ++object_id;
}

dynamic_objects[temp_object_id].x_center = x_center;
dynamic_objects[temp_object_id].y_center = y_center;
dynamic_objects[temp_object_id].x1       = x1;
dynamic_objects[temp_object_id].y1       = y1;
dynamic_objects[temp_object_id].x2       = x2;
dynamic_objects[temp_object_id].y2       = y2;

dynamic_objects[temp_object_id].x_weight_vel =
    (dynamic_objects[temp_object_id].x_weight_vel + dynamic_objects[temp_object_id].x_vel) /
    2;
dynamic_objects[temp_object_id].y_weight_vel =
    (dynamic_objects[temp_object_id].y_weight_vel + dynamic_objects[temp_object_id].y_vel) /
    2;
First, the pixel velocity dynamic_objects[temp_object_id].x_vel of the box in frame j is computed. Second, dynamic_objects[temp_object_id].x_weight_vel is overwritten with dynamic_objects[temp_object_id].x_vel, at which point the pixel velocity and the weighted predicted velocity are identical. The subsequent dynamic_objects[temp_object_id].x_weight_vel = (dynamic_objects[temp_object_id].x_weight_vel + dynamic_objects[temp_object_id].x_vel) / 2; therefore reduces to x_weight_vel = (x_vel + x_vel) / 2 = x_vel, i.e. it does not change x_weight_vel at all compared with assigning x_vel directly.
So what is the purpose of this code? I am quite confused about it.
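A tiny numeric sketch (my own, with made-up velocities) of the observation above: Eq. (4)'s recursion keeps memory of past velocities, but overwriting x_weight_vel with x_vel first makes the averaging degenerate:

pixel_vel = [2.0, 4.0, 6.0]  # hypothetical per-frame box pixel velocities v_j

# Eq. (4) as written: v_bar_{j+1} = (v_bar_j + v_j) / 2, a smoothed estimate.
v_bar = pixel_vel[0]
for v in pixel_vel[1:]:
    v_bar = (v_bar + v) / 2
print(v_bar)  # 4.5 -- retains memory of earlier frames

# The quoted code: v_bar is reset to v_j before averaging, so the average is a no-op.
for v in pixel_vel:
    v_bar = v                # x_weight_vel = x_vel
    v_bar = (v_bar + v) / 2  # (v + v) / 2 == v
print(v_bar)  # 6.0 -- identical to the latest raw velocity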

About RPE

Hello, I notice that the graph in your article shows RPE below the ATE. How can I add RPE to the graph using a script? Could you provide the evaluation script used for the graph? Thank you!
(graph screenshot)

Result of Different Playing Rate

Hi, I tried to test this algorithm with a recorded rosbag. The results differ depending on the playback rate: at the normal rate (x1.0) it has large drifts and often slows down or suddenly accelerates at some frames, especially when it detects more features; at a playback rate of x0.2 the drift is quite small. What do you think of this problem?

undefined symbol: _ZN6Sophus3SO3C1Eddd

Hello author!

I am trying to reproduce your work following the README, and I have run into a problem:
when running the cafe1-1 dataset with the commands below, I get the error undefined symbol: _ZN6Sophus3SO3C1Eddd (see the screenshot below).
It looks Sophus-related, but I installed Sophus as required, with git checkout a621ff.

I would appreciate any pointers, thanks!

roslaunch vins_estimator openloris_vio_pytorch.launch
roslaunch vins_estimator vins_rviz.launch
rosbag play ~/Dynamic_VINS/Dataset/OpenLORIS/cafe/cafe1-1.bag-imu.bag

(error screenshot)

yolo frequency problem

Hello, running yolo_ros on its own I can publish /untracked_info at 30 Hz, but when running vins at the same time, /untracked_info is only published at 10 Hz. Do you know what causes this?

The test environment is an i7 CPU with a 2080 GPU, using the PyTorch version of yolo.

Specifying the GPU yolo runs on

Hello, I have multiple GPUs and the program runs on gpu:0 by default. How can I change it to run on a different GPU?
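A generic PyTorch/CUDA note (not specific to this repo): the usual way to pin a Python process to a different GPU is to remap the visible devices before CUDA is initialized, for example:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before torch initializes CUDA

import torch
print(torch.cuda.device_count())  # prints 1: physical GPU 1 is now visible as cuda:0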

Some questions about deployment on an NVIDIA Jetson Xavier NX

Hello author, I tried deploying Dynamic-VINS on an NVIDIA Jetson Xavier NX with an Intel RealSense D435i camera. What I find so far is that the image frame rate is rather low, and that the estimate drifts easily as soon as the IMU is used. Regarding the low frame rate: did you use GPU acceleration, or is the CPU of the NVIDIA Jetson AGX Xavier used on the drone in the paper simply powerful enough that this is not an issue? Regarding the drift: I am using the D435i's built-in IMU; could it be that the D435i's IMU is not accurate enough, or is there some other reason? I would appreciate any advice and experience you can share.

HITSZ and THUSZ dataset download

Hello,
following the dataset download link you provided, I land on this page:
(screenshot)
Clicking hitsz00.bag then opens another page:
(screenshot)
But the bag is not downloaded. How should I go about downloading all of these bags?

Nothing appears in rviz

Line 3 says rosbag play YOUR_PATH_TO_DATASET/cafe.bag.
I would like to know the path of cafe.bag: is it cafe1-1.bag or cafe1-2.bag?
Thanks.

No feature_img shown when running the campus dataset

I have also configured the environment yolo needs, but only the IMU data is tracked; there is no camera object detection or feature-point detection. How can I get the visualization shown in your videos? Thank you.

Error running the HITSZ & THUSZ datasets

When playing hitsz_00.bag, the terminal running roslaunch vins_estimator realsense_vio_campus.launch shows:
(error screenshot)
and there is a warning in rviz:
(warning screenshot)
What could the reason be?

imu in disorder

Hello, after merging the IMU data of the OpenLORIS bags with scripts/merge_imu_topics.py, running the system produces: "imu message in disorder." What could the reason be?
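As a sanity check (my own sketch, not part of the repo), the merged bag can be scanned for non-monotonic IMU timestamps, which is what triggers VINS's "imu message in disorder" warning; the bag path and IMU topic name below are assumptions:

import rosbag

last_t = None
with rosbag.Bag("cafe1-1.bag-imu.bag") as bag:                 # example path
    for _, msg, _ in bag.read_messages(topics=["/d400/imu"]):  # assumed merged topic
        t = msg.header.stamp.to_sec()
        if last_t is not None and t <= last_t:
            print("disorder: %.9f after %.9f" % (t, last_t))
        last_t = t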

Failed to load nodelet '/EstimatorNodelet'

Excuse me, I tried switching between OpenCV 3.4 and 4.3, but I still get the error:

Failed to load nodelet '/EstimatorNodelet' of type 'vins_estimator/EstimatorNodelet' to manager 'nodelet_manager_pc'


YOLO reshape array problem using ZED2

I'm facing an issue when using a ZED2. I've configured the various parameters, but upon execution I encounter an error. The problem seems to be related to the image format conversion when the YOLO frontend receives the image. Could you please provide any solutions or insights regarding this error?

Thank you!!

Below is my config and launch file:

%YAML:1.0

num_threads: 0  # 0  Use the max number of threads of your device.
                #    For some devices, like the HUAWEI Atlas200, the auto-detected max number of threads might not be equivalent to the usable number of threads. (Some cores (threads) might be reserved by the system for other usage (NPU).)
                # x  It is proper that 1 < x < MAX_THREAD.
                # For now, this parameter is relevant to running the grid detector in parallel.

#common parameters
imu: 1
static_init: 0 # fix_depth should be set to 1 if static_init is set to 1
imu_topic: "/zed/zed_node/imu/data"
image_topic: "/zed/zed_node/rgb/image_rect_color"
depth_topic: "/zed/zed_node/depth/depth_registered"
output_path: "/home/ericlai/testdynamicvins_ws/src/Dynamic-VINS/output"

#RGBD camera Ideal Range
depth_min_dist: 0.3
depth_max_dist: 6
fix_depth: 0    # 1: features in the ideal range will be set as constant

frontend_freq: 30 # It should be raised in VO mode (without IMU).
num_grid_rows: 7
num_grid_cols: 8

#camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 360
  #TODO modify distortion

distortion_parameters:
   k1: 0.0
   k2: 0.0
   p1: 0.0
   p2: 0.0
projection_parameters:
   fx: 277.9840393066406
   fy: 277.9840393066406
   cx: 326.0340576171875
   cy: 193.9207305908203

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.9999,-0.0120,-0.0025,
         0.0120,0.9999,-0.0028,
         0.0025,0.0028,1.0000] # ZED2 SDK Camera-IMU Transform

#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.0017, 0.0230, -0.0002] # ZED2 SDK Camera-IMU Transform

#feature tracker parameters
max_cnt: 150           # max feature number in feature tracking. It is suggested to be raised in VO mode.
min_dist: 25           # min distance between two features
freq: 10                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 0             # if the image is too dark or too light, turn on equalize to find enough features
fisheye: 0              # if using a fisheye camera, turn it on. A circle mask will be loaded to remove noisy edge points

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel); if the system fails frequently, please try to reduce the "keyframe_parallax"

#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 0.5          # accelerometer measurement noise standard deviation.
gyr_n: 0.3         # gyroscope measurement noise standard deviation.
acc_w: 0.001         # accelerometer bias random walk noise standard deviation.
gyr_w: 0.0001       # gyroscope bias random walk noise standard deviation.

g_norm: 9.81       # gravity magnitude

#unsynchronization parameters
estimate_td: 1    ##########                  # online estimate time offset between camera and imu
td: 0.0                           # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0                      # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0.0               # unit: s. rolling shutter read out time per frame (from data sheet).

#loop closure parameters
loop_closure: 0                    # start loop closure
fast_relocalization: 0            # useful in real-time and large project
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/ericlai/testdynamicvins_ws/src/Dynamic-VINS/output/pose_graph" # save and load path

#visualization parameters
save_image: 0                   # enabling this might cause a crash; save images in the pose graph for visualization purposes; you can disable this function by setting 0
visualize_imu_forward: 0        # output imu forward propagation to achieve low-latency and high-frequency results
visualize_camera_size: 0.4      # size of camera marker in RVIZ

#Only Consider Moving Objects
dynamic_label: ["person", "cat", "dog", "bicycle", "car","bus"]
<launch>
    <arg name="config_path" default="$(find vins_estimator)/../config/stereolabszed2/stereolabszed2.yaml" />
    <arg name="vins_path" default="$(find vins_estimator)/../config/../" />

    <remap from="/camera/color/image_raw" to="/zed/zed_node/rgb/image_rect_color" />
    <include file="$(find yolo_ros)/launch/yolo_service.launch">
    </include>

    <arg name="manager_name" default="nodelet_manager_pc" />
    <node pkg="nodelet" type="nodelet" name="$(arg manager_name)" args="manager" output="screen" />

    <node pkg="nodelet" type="nodelet" name="EstimatorNodelet" args="load vins_estimator/EstimatorNodelet $(arg manager_name)" output="screen">
        <param name="config_file" type="string" value="$(arg config_path)" />
        <param name="vins_folder" type="string" value="$(arg vins_path)" />
    </node>

    <!-- <node pkg="nodelet" type="nodelet" name="PoseGraphNodelet" args="load pose_graph/PoseGraphNodelet $(arg manager_name)" output="screen">
        <param name="config_file" type="string" value="$(arg config_path)"/>
        <param name="visualization_shift_x" type="int" value="0"/>
        <param name="visualization_shift_y" type="int" value="0"/>
        <param name="skip_cnt" type="int" value="0"/>
        <param name="skip_dis" type="double" value="0"/>
    </node> -->

</launch>

Output of Estimated Trajectory

Hi there. I want to evaluate the estimated pose against the ground-truth pose over an entire trajectory. Is there already a tool or function available to output or save the pose result?

results about evaluation

Hello author, I ran into some problems when evaluating the algorithm. I modified void pubOdometry(const Estimator &estimator, const std_msgs::Header &header) in visualization.cpp as shown below, so that it outputs data of the same type as the cafe-1-1 groundtruth.txt. But when I use the EVO evaluation tool, it reports that the timestamps are not aligned; how should I solve this? Also, I can successfully evaluate ATE with the openloris tool, but that tool does not provide RPE evaluation. Is the RPE in the paper computed with the EVO evaluation tool?
(screenshot of the modified pubOdometry)

cafe-1-1-IMU.txt evaluation result:
(screenshot)
cafe-1-1 groundtruth.txt:
(screenshot)
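For the timestamp issue, a minimal sketch (mine, not the authors' script) of how evo's Python API associates the two files by nearest timestamp within max_diff; both files need a common time base in seconds since epoch, not identical stamps. For RPE, evo also ships evo_rpe alongside evo_ape:

from evo.core import sync
from evo.tools import file_interface

ref = file_interface.read_tum_trajectory_file("groundtruth.txt")   # placeholder paths
est = file_interface.read_tum_trajectory_file("cafe-1-1-IMU.txt")
ref, est = sync.associate_trajectories(ref, est, max_diff=0.02)    # 20 ms tolerance
print(ref.num_poses, "pose pairs matched")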
