hitsz-nrsl / dynamic-vins
[RA-L 2022] RGB-D Inertial Odometry for a Resource-restricted Robot in Dynamic Environments
Excuse me, my questions are as follows:
1. I ran the office1-1 experiment and used the evo tool with the `-r full` parameter to get the ATE RMSE, which was 2.3, quite different from the results of the paper. I want to know whether the ATE RMSE in Figure 6 is the ATE obtained by considering both rotation and translation errors, i.e. whether testing with the `-r full` parameter in evo is correct. And how can I tune things to make my results closer to the paper's?
2. For the values in the upper-left corner of each plot in Figure 6, is there a tool to compute the average percentage of correct values for every scene?
Thank you!
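For reference, evo's `-r full` uses the full SE(3) pose error (it folds a unitless rotation term into the metric), whereas the ATE RMSE reported in most VIO papers is translation-only, i.e. evo's `-r trans_part` (the default of `evo_ape`). A minimal hand-rolled sketch of the translation-only metric, assuming already-associated and aligned trajectories:

```python
import numpy as np

def ate_rmse_translation(gt_xyz, est_xyz):
    """Translation-only ATE RMSE (what evo computes with -r trans_part)."""
    errors = np.linalg.norm(np.asarray(gt_xyz) - np.asarray(est_xyz), axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy trajectories (3 associated poses, translation only):
gt  = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
est = [[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.2, 0.0, 0.0]]
print(round(ate_rmse_translation(gt, est), 4))  # 0.1291
```

So if the paper's Figure 6 numbers are translation-only, re-running evo with `-r trans_part` (and with SE(3) alignment, `-a`) should bring your numbers much closer than `-r full` does.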
Related to #6 .
May I ask how it is managed when a failure is detected and what would be the best way to evaluate it?
Because in my experiments each failure brings the robot back to (0, 0, 0) and reinitializes everything. However, at that point it is difficult to evaluate the trajectory.
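One pragmatic way to evaluate despite re-initializations (a hedged sketch, not something Dynamic-VINS provides): split the estimated trajectory wherever the pose snaps back to the origin, then evaluate each segment separately (e.g. per-segment APE in evo). The thresholds below are arbitrary assumptions.

```python
# Hypothetical helper: split a trajectory into segments at re-initializations,
# detected as the pose jumping back to (0, 0, 0). `poses` is a list of (t, x, y, z).
def split_on_reinit(poses, origin_eps=1e-6, jump_thresh=0.5):
    segments, current = [], []
    for i, (t, x, y, z) in enumerate(poses):
        at_origin = abs(x) < origin_eps and abs(y) < origin_eps and abs(z) < origin_eps
        if current and at_origin:
            px, py, pz = poses[i - 1][1:]
            # a reinit: we were far from the origin and snapped back to it
            if (px * px + py * py + pz * pz) ** 0.5 > jump_thresh:
                segments.append(current)
                current = []
        current.append((t, x, y, z))
    if current:
        segments.append(current)
    return segments
```

Each returned segment can then be aligned and scored independently, which avoids the artificial jump polluting a whole-trajectory ATE.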
Hi there, we're running a custom bag file.
My guess is that we need to change the focal length on the parameters.h file.
Still, we are not able to run with the IMU, since we continuously get ROS_INFO("Not enough features or parallax; Move device around");
Any tip on how we can solve it, apart from considering another bag? Any parameter we should look into, given that we run the IMU at 240 Hz and have a different camera?
Thanks for your help
The commands used were:
roslaunch vins_estimator openloris_vio_pytorch.launch
roslaunch vins_estimator vins_rviz.launch
rosbag play YOUR_PATH_TO_DATASET/market1-3.bag-imu.bag
In Rviz, selecting the topic /vins_estimator/semantic_mask in an image display shows no image output.
The error messages are as follows:
[TensorRT] WARNING: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
bingding: data (3, 640, 640)
bingding: prob (6001, 1, 1)
batch size is 1
[FATAL] [1685328734.855312649]: Failed to load nodelet '/EstimatorNodelet of type vins_estimator/EstimatorNodelet to manager nodelet_manager'
[nodelet_manager-3] process has died [pid 7843, exit code -11, cmd /opt/ros/melodic/lib/nodelet/nodelet manager __name:=nodelet_manager __log:=/home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/nodelet_manager-3.log].
log file: /home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/nodelet_manager-3*.log
[EstimatorNodelet-4] process has died [pid 7844, exit code 255, cmd /opt/ros/melodic/lib/nodelet/nodelet load vins_estimator/EstimatorNodelet nodelet_manager __name:=EstimatorNodelet __log:=/home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/EstimatorNodelet-4.log].
log file: /home/wheeltec/.ros/log/b508d36a-fdcb-11ed-812a-5c879c1d9813/EstimatorNodelet-4*.log
Environment: melodic + opencv 3.4.12 + ceres 1.13.0 + eigen 3.3.4
Thank you for your great work.
I am wondering about IMU-aided feature tracking. I searched for this reference in your paper (Dynamic-VINS), but I couldn't find it.
Service call failed: service [/yolo_service] responded with an error: error processing request: 'Upsample' object has no attribute 'recompute_scale_factor'
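This is a known incompatibility between older YOLOv5 checkpoints and newer PyTorch: `nn.Upsample` modules pickled before torch 1.11 lack the `recompute_scale_factor` attribute that newer `forward()` code reads. A common workaround (a sketch, not part of this repo; pinning torch <= 1.10 or pulling a newer YOLOv5 also works) is to add the missing attribute after loading the model:

```python
def patch_upsample(model):
    """Add the attribute that newer torch expects to Upsample modules
    loaded from an old checkpoint (missing 'recompute_scale_factor')."""
    for m in model.modules():
        if m.__class__.__name__ == "Upsample" and not hasattr(m, "recompute_scale_factor"):
            m.recompute_scale_factor = None
    return model

# Assumed usage with a real YOLOv5 checkpoint (file name is an example):
#   ckpt = torch.load("yolov5s.pt", map_location="cpu")
#   model = patch_upsample(ckpt["model"].float())
```

After the patch, inference no longer hits the `'Upsample' object has no attribute 'recompute_scale_factor'` error.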
Hi,
In the paper you mention YOLOv3, then in the credits of the readme you say YOLOv5 (and yolo_ros uses that afaik), but the weights are from v3. Which one is it? I am a bit confused.
Hi author, can yolo_ros be run on its own to track objects in images? I see that the yolo_ros topic is /untracked_info, but I get an error when visualizing it in rviz.
Hello, I have run Dynamic-VINS on my platform successfully. However, I met a problem when evaluating it.
I do not know how to save the estimated trajectory in TUM format. The saved *.csv file's format is different from the groundtruth.txt of OpenLORIS.
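In case it helps, a minimal conversion sketch. I'm assuming the csv rows are `timestamp, x, y, z, qw, qx, qy, qz, ...` with the timestamp in nanoseconds (please check your csv header; the column order here is an assumption). TUM format wants `timestamp[s] tx ty tz qx qy qz qw`:

```python
# Hypothetical converter: VINS-style csv rows -> TUM-format lines.
def csv_to_tum(lines):
    out = []
    for line in lines:
        if line.startswith("#"):
            continue  # skip header lines
        f = [s.strip() for s in line.strip().split(",")]
        if len(f) < 8:
            continue  # skip short/malformed rows
        t = float(f[0]) * 1e-9  # assumed: timestamp in nanoseconds -> seconds
        x, y, z, qw, qx, qy, qz = f[1:8]
        # TUM order puts qw last
        out.append(f"{t:.9f} {x} {y} {z} {qx} {qy} {qz} {qw}")
    return out
```

The resulting lines can be written to a `.txt` file and compared directly against OpenLORIS's groundtruth.txt with evo.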
Hi author, when I run it, the following problem occurs:
It tells me that libestimator_nodelet.so is missing, but the file already exists under my /devel/lib/ directory. My environment is ubuntu 20.04 with ros noetic.
[ INFO] [1679707768.981822341]: waitForService: Service [/yolo_service] has not been advertised, waiting...
[ INFO] [1679707768.984874317]: Loading nodelet /EstimatorNodelet of type vins_estimator/EstimatorNodelet to manager nodelet_manager_pc with the following remappings:
[ INFO] [1679707768.985121969]: /camera/color/image_raw -> /d400/color/image_raw
[ INFO] [1679707768.985669141]: waitForService: Service [/nodelet_manager_pc/load_nodelet] has not been advertised, waiting...
[ INFO] [1679707768.989190628]: Initializing nodelet with 20 worker threads.
[ INFO] [1679707769.006713921]: waitForService: Service [/nodelet_manager_pc/load_nodelet] is now available.
[ERROR] [1679707769.030456673]: Failed to load nodelet [/EstimatorNodelet] of type [vins_estimator/EstimatorNodelet] even after refreshing the cache: Failed to load library /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so: undefined symbol: _Z11DEPTH_TOPICB5cxx11)
[ERROR] [1679707769.036358328]: The error before refreshing the cache was: Failed to load library /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/zhu/SlamProjects/Dynamic_VINS_ws/devel/lib//libestimator_nodelet.so: undefined symbol: _Z11DEPTH_TOPICB5cxx11)
[FATAL] [1679707769.056044454]: Failed to load nodelet '/EstimatorNodelet of type vins_estimator/EstimatorNodelet to manager nodelet_manager_pc'
[EstimatorNodelet-5] process has died [pid 19624, exit code 255, cmd /opt/ros/noetic/lib/nodelet/nodelet load vins_estimator/EstimatorNodelet nodelet_manager_pc /camera/color/image_raw:=/d400/color/image_raw __name:=EstimatorNodelet __log:=/home/zhu/.ros/log/7c3e0fb6-caac-11ed-b95c-c1c703164ee1/EstimatorNodelet-5.log].
log file: /home/zhu/.ros/log/7c3e0fb6-caac-11ed-b95c-c1c703164ee1/EstimatorNodelet-5*.log
YOLOv5 🚀 e310fa9 torch 1.10.2+cu113 CUDA:0 (NVIDIA GeForce RTX 3060, 12036MiB)
My setup is ros melodic + opencv 3.2 + cuda 11.0. After repeated attempts I still cannot see any object detection results. Could you tell me which versions of the various packages you used when running under ubuntu 18.04?
Hello, I also ran into the problem Failed to load nodelet '/EstimatorNodelet of type vins_estimator/EstimatorNodelet to manager nodelet_manager_pc'. However, I am deploying on an NVIDIA NX: flashing the NX with SDK Manager installs OpenCV 4, and the OpenCV bundled with ros-melodic was not installed, so I cannot point to ROS's own OpenCV in the usual way. My questions are: what exactly causes the nodelet to fail to load? Would it work to download and build OpenCV 3.2.0 myself (the version bundled with ros-melodic) and then modify cv_bridge accordingly? Thanks.
Hi there.
I'm facing some issues replaying my bags.
Essentially after some time the system starts drifting when I use VIO.
With VO it does not happen.
My guess is that it's mainly related to the fact that the robot is not horizontal and static at the beginning of the experiment, or maybe due to spikes that we have in the IMU. However, those spikes are coherent with the odometry (it's simulated data).
Is there a way to tell your system "this is my initial position/orientation" so that gravity and the IMU are initialized correctly?
Can I follow this link https://github.com/ultralytics/yolov5 and just update the weight file from my training results?
Great work!
I met some problems when I read the paper and run it.
Hi author, I have a question about the predictPtsInNextFrame function:
void FeatureTracker::predictPtsInNextFrame(const Matrix3d &_relative_R)
{
    predict_pts.resize(cur_pts.size());
    for (unsigned int i = 0; i < cur_pts.size(); ++i)
    {
        Eigen::Vector3d tmp_P;
        m_camera->liftProjective(Eigen::Vector2d(cur_pts[i].x, cur_pts[i].y), tmp_P);
        Eigen::Vector3d predict_P = _relative_R * tmp_P;
        Eigen::Vector2d tmp_p;
        m_camera->spaceToPlane(predict_P, tmp_p);
        predict_pts[i].x = tmp_p.x();
        predict_pts[i].y = tmp_p.y();
    }
}
Is _relative_R here R21? You multiply the normalized coordinates of the previous frame by this rotation to get the predicted coordinates, and then convert them to pixel coordinates. I would like to ask about the meaning of _relative_R * tmp_P: in the transform P2 = R21*P1 + t21, P1 is not a normalized coordinate; it needs depth information.
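Not an authoritative answer, but a sketch of why depth is not needed here: the prediction drops the translation term, and perspective projection is scale-invariant, so the unknown depth d cancels:

```latex
% Pinhole projection \pi is scale-invariant: \pi(\lambda v) = \pi(v),\ \lambda > 0.
% Let \bar{p}_1 be the normalized coordinate and d its (unknown) depth:
P_1 = d\,\bar{p}_1, \qquad
P_2 = R_{21} P_1 + t_{21} \approx d\,R_{21}\,\bar{p}_1
\quad (t_{21}\ \text{dropped for prediction})
% Projecting, the depth cancels:
\pi(P_2) \approx \pi(d\,R_{21}\,\bar{p}_1) = \pi(R_{21}\,\bar{p}_1)
```

So spaceToPlane(_relative_R * tmp_P) gives a valid pixel prediction as long as the inter-frame translation is small relative to the scene depth.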
I also don't have depth and want to evaluate VINS with yolo. Could you please give me your suggestions on what to edit in this code?
I have run Dynamic-VINS on my own platform and evaluated it using openloris-scene-tools.
I compared our results with the results in the paper, and found that the result on market1-3 is not the same as in the paper, as shown below.
The last line in the first figure is our result; the second figure shows the results in the original paper. As you can see, our result and VINS (both pinhole and fisheye) on market1-3 are inconsistent at the same position, whereas the results in your paper are consistent. Can I achieve the effect in the paper by adjusting the parameters in the *.yaml files?
The parameters we currently use that may be related to this are configured as follows:
#RGBD camera Ideal Range
depth_min_dist: 0.3
depth_max_dist: 3
frontend_freq: 30 # It should be raised in VO mode(without IMU).
keyframe_parallax: 10.0 # keyframe selection threshold (pixel); if system fails frequently, please try to reduce the "keyframe_parallax"
num_grid_rows: 7
num_grid_cols: 8
#unsynchronization parameters
estimate_td: 1      # online estimate time offset between camera and imu
td: 0.000           # initial value of time offset. unit: s. image timestamp + td = real image timestamp (IMU clock)
#feature tracker parameters
max_cnt: 130        # max feature number in feature tracking. It is suggested to raise it in VO mode.
min_dist: 30        # min distance between two features
freq: 10            # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0    # ransac threshold (pixel)
show_track: 1       # publish tracking image as topic
equalize: 1         # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 0          # if using a fisheye lens, turn it on; a circle mask will be loaded to remove noisy edge points
#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
#imu parameters  The more accurate the parameters you provide, the better the performance
acc_n: 0.1          # accelerometer measurement noise standard deviation.
gyr_n: 0.01         # gyroscope measurement noise standard deviation.
acc_w: 0.0002       # accelerometer bias random walk noise standard deviation. #0.02
gyr_w: 2.0e-5       # gyroscope bias random walk noise standard deviation. #4.0e-5
g_norm: 9.805 # gravity magnitude
#rolling shutter parameters
# rolling_shutter: 0 # 0: global shutter camera, 1: rolling shutter camera
# rolling_shutter_tr: 0 # unit: s. rolling shutter read out time per frame (from data sheet)
rolling_shutter: 1 # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0.033 # unit: s. rolling shutter read out time per frame (from data sheet)
Hello, I have a new question about the weighted predicted velocity in Eq. (4) of your paper.
As we can see, the weighted predicted velocity of frame j+1 is updated from the weighted predicted velocity and the pixel velocity of frame j. In the source code, however, I found that the weighted predicted velocity of frame j is the same as the pixel velocity.
if (temp_object_id > 0)
{
    dynamic_objects[temp_object_id].x_vel =
        x_center - dynamic_objects[temp_object_id].x_center;
    dynamic_objects[temp_object_id].y_vel =
        y_center - dynamic_objects[temp_object_id].y_center;
    dynamic_objects[temp_object_id].x_weight_vel = dynamic_objects[temp_object_id].x_vel;
    dynamic_objects[temp_object_id].y_weight_vel = dynamic_objects[temp_object_id].y_vel;
}
else
{
    temp_object_id = ++object_id;
}
dynamic_objects[temp_object_id].x_center = x_center;
dynamic_objects[temp_object_id].y_center = y_center;
dynamic_objects[temp_object_id].x1 = x1;
dynamic_objects[temp_object_id].y1 = y1;
dynamic_objects[temp_object_id].x2 = x2;
dynamic_objects[temp_object_id].y2 = y2;
dynamic_objects[temp_object_id].x_weight_vel =
    (dynamic_objects[temp_object_id].x_weight_vel + dynamic_objects[temp_object_id].x_vel) / 2;
dynamic_objects[temp_object_id].y_weight_vel =
    (dynamic_objects[temp_object_id].y_weight_vel + dynamic_objects[temp_object_id].y_vel) / 2;
First, the pixel velocity dynamic_objects[temp_object_id].x_vel of the box in frame j is computed. Second, dynamic_objects[temp_object_id].x_weight_vel is updated using dynamic_objects[temp_object_id].x_vel. At this point the pixel velocity and the weighted predicted velocity are the same. Then dynamic_objects[temp_object_id].x_weight_vel = (dynamic_objects[temp_object_id].x_weight_vel + dynamic_objects[temp_object_id].x_vel) / 2; reduces to x_weight_vel = (x_vel + x_vel) / 2 = x_vel, so it does not seem to change the value of x_weight_vel compared to simply assigning x_weight_vel = x_vel.
So, what is the purpose of this code? I am quite confused about it.
Hi, I tried to test this algorithm with a recorded rosbag. The results differ with different playback rates: at 1.0x playback it has large drifts and often slows down or suddenly accelerates at some frames, especially when it detects more features; at 0.2x playback the drift is quite small. What do you think of this problem?
Hi author!
I am trying to reproduce your work following the readme, and I have run into a problem.
When running the cafe1-1 dataset with the commands below, I get the error undefined symbol: _ZN6Sophus3SO3C1Eddd (see the screenshot below).
It looks related to Sophus, but I installed Sophus as required, with git checkout a621ff.
I hope to get some pointers, thanks!
roslaunch vins_estimator openloris_vio_pytorch.launch
roslaunch vins_estimator vins_rviz.launch
rosbag play ~/Dynamic_VINS/Dataset/OpenLORIS/cafe/cafe1-1.bag-imu.bag
Hi, when I run yolo_ros alone, /untracked_info is published at 30 Hz, but when vins runs at the same time /untracked_info is only published at 10 Hz. Do you know why that is?
The test environment is an i7 CPU with a 2080 GPU, using the PyTorch version of yolo.
Hi, I have multiple GPUs and the program runs on gpu:0 by default. How can I make it run on another GPU?
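Not a Dynamic-VINS-specific answer, but the usual PyTorch options are: restrict which devices the process can see before CUDA initializes, or move the model and its inputs to an explicit device. A sketch (the device ids are examples):

```python
import os

# Option 1: expose only physical GPU 1; inside the process it appears as cuda:0.
# Must be set before torch initializes CUDA (or export it before roslaunch).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Option 2 (assumed usage inside the yolo node): select the device explicitly.
#   import torch
#   device = torch.device("cuda:1")
#   model = model.to(device)
#   img = img.to(device)
```

Option 1 is the least invasive, since it requires no code changes, only setting the environment variable in the launch shell.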
Hi author, I tried deploying Dynamic-VINS on an NVIDIA Jetson Xavier NX with an Intel RealSense D435i camera. What I found so far is that the image frame rate is rather low, and that the estimate drifts easily as soon as the IMU is used. Regarding the low frame rate: did you use GPU acceleration, or is the CPU of the NVIDIA Jetson AGX Xavier used on the drone in the paper simply powerful enough to avoid this problem? Regarding the drift: I am using the D435i's built-in IMU. Could it be that the D435i's IMU is not accurate enough, or is there some other cause? I would appreciate any advice and experience you can share.
Hi, after merging the IMU data of the OpenLORIS dataset with scripts/merge_imu_topics.py, running it produces: imu message in disorder. What could be the reason?
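"imu message in disorder" usually means the merged IMU stream's timestamps are not strictly increasing (for example, the accelerometer and gyroscope topics were interleaved out of order during the merge, or the bag was replayed too fast). A quick stdlib sketch for locating the offending messages, given the list of header timestamps extracted from the merged bag:

```python
def find_disorder(stamps):
    """Indices i where stamps[i] <= stamps[i-1], i.e. non-increasing timestamps."""
    return [i for i in range(1, len(stamps)) if stamps[i] <= stamps[i - 1]]

print(find_disorder([0.0, 0.01, 0.02, 0.015, 0.03]))  # [3]
```

If such indices exist in the merged bag, re-sorting the IMU messages by header timestamp before (or during) the merge should remove the warning.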
I'm facing an issue when using ZED2. I've configured various parameters, but upon execution, I encounter an error. The problem seems to be related to the image format conversion when YOLO frontend receives the image. Could you please provide any solutions or insights regarding this error?
Thank you!!
Below is my config and launch file:
%YAML:1.0
num_threads: 0      # 0: use the max number of threads of your device.
                    # For some devices, like the HUAWEI Atlas200, the auto-detected max number of threads might not equal the usable number of threads (some cores/threads might be reserved by the system for other usage, e.g. the NPU).
                    # A value x with 1 < x < MAX_THREAD is appropriate.
                    # For now, this parameter only affects running the grid detector in parallel.
#common parameters
imu: 1
static_init: 0 # fix_depth should be set 1 if static_init is set 1
imu_topic: "/zed/zed_node/imu/data"
image_topic: "/zed/zed_node/rgb/image_rect_color"
depth_topic: "/zed/zed_node/depth/depth_registered"
output_path: "/home/ericlai/testdynamicvins_ws/src/Dynamic-VINS/output"
#RGBD camera Ideal Range
depth_min_dist: 0.3
depth_max_dist: 6
fix_depth: 0 # 1: feature in ideal range will be set as constant
frontend_freq: 30 # It should be raised in VO mode(without IMU).
num_grid_rows: 7
num_grid_cols: 8
#camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 360
#TODO modify distortion
distortion_parameters:
k1: 0.0
k2: 0.0
p1: 0.0
p2: 0.0
projection_parameters:
fx: 277.9840393066406
fy: 277.9840393066406
cx: 326.0340576171875
cy: 193.9207305908203
# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0 # 0 Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
# 1 Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
# 2 Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [0.9999,-0.0120,-0.0025,
0.0120,0.9999,-0.0028,
0.0025,0.0028,1.0000] # ZED2 SDK Camera-IMU Transform
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
rows: 3
cols: 1
dt: d
data: [0.0017, 0.0230, -0.0002] # ZED2 SDK Camera-IMU Transform
#feature tracker parameters
max_cnt: 150        # max feature number in feature tracking. It is suggested to raise it in VO mode.
min_dist: 25        # min distance between two features
freq: 10            # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0    # ransac threshold (pixel)
show_track: 1       # publish tracking image as topic
equalize: 0         # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 0          # if using a fisheye lens, turn it on; a circle mask will be loaded to remove noisy edge points
#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel); if system fails frequently, please try to reduce the "keyframe_parallax"
#imu parameters  The more accurate the parameters you provide, the better the performance
acc_n: 0.5          # accelerometer measurement noise standard deviation.
gyr_n: 0.3          # gyroscope measurement noise standard deviation.
acc_w: 0.001        # accelerometer bias random walk noise standard deviation.
gyr_w: 0.0001       # gyroscope bias random walk noise standard deviation.
g_norm: 9.81 # gravity magnitude
#unsynchronization parameters
estimate_td: 1      # online estimate time offset between camera and imu
td: 0.0             # initial value of time offset. unit: s. image timestamp + td = real image timestamp (IMU clock)
#rolling shutter parameters
rolling_shutter: 0 # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0.0 # unit: s. rolling shutter read out time per frame (from data sheet).
#loop closure parameters
loop_closure: 0 # start loop closure
fast_relocalization: 0 # useful in real-time and large project
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/ericlai/testdynamicvins_ws/src/Dynamic-VINS/output/pose_graph" # save and load path
#visualization parameters
save_image: 0               # enabling this might cause a crash; saves images in the pose graph for visualization purposes; set 0 to disable
visualize_imu_forward: 0    # output imu forward propagation to achieve low-latency and high-frequency results
visualize_camera_size: 0.4 # size of camera marker in RVIZ
#Only Consider Moving Objects
dynamic_label: ["person", "cat", "dog", "bicycle", "car","bus"]
<launch>
<arg name="config_path" default="$(find vins_estimator)/../config/stereolabszed2/stereolabszed2.yaml" />
<arg name="vins_path" default="$(find vins_estimator)/../config/../" />
<remap from="/camera/color/image_raw" to="/zed/zed_node/rgb/image_rect_color" />
<include file="$(find yolo_ros)/launch/yolo_service.launch">
</include>
<arg name="manager_name" default="nodelet_manager_pc" />
<node pkg="nodelet" type="nodelet" name="$(arg manager_name)" args="manager" output="screen" />
<node pkg="nodelet" type="nodelet" name="EstimatorNodelet" args="load vins_estimator/EstimatorNodelet $(arg manager_name)" output="screen">
<param name="config_file" type="string" value="$(arg config_path)" />
<param name="vins_folder" type="string" value="$(arg vins_path)" />
</node>
<!-- <node pkg="nodelet" type="nodelet" name="PoseGraphNodelet" args="load pose_graph/PoseGraphNodelet $(arg manager_name)" output="screen">
<param name="config_file" type="string" value="$(arg config_path)"/>
<param name="visualization_shift_x" type="int" value="0"/>
<param name="visualization_shift_y" type="int" value="0"/>
<param name="skip_cnt" type="int" value="0"/>
<param name="skip_dis" type="double" value="0"/>
</node> -->
</launch>
Could you please provide us with the ground truth or the RTK-GPS trajectories, if any, for the HITSZ & THUSZ datasets?
Hi there. I want to evaluate the estimated pose against the ground-truth pose over an entire trajectory. Is there any tool or function already available to output or save the pose result?