
viso2's Introduction

viso2

ROS stack containing a wrapper for libviso2, a visual odometry library. See http://www.ros.org/wiki/viso2 for the list of contained packages.


Omnidirectional version

  • This package contains an additional version of the original Libviso2 that works with a monocular omnidirectional camera system.

Dependencies

  • The only additional dependency of this version is the OCamCalib camera calibration toolbox for Matlab.
    • Instead of using a normal pinhole camera calibration model, the method uses the unified camera model proposed by Davide Scaramuzza.
    • Download the toolbox from here, put it in your Matlab workspace, and execute the ocam_calib command to run the application.
    • After calibrating the camera, save the generated text file; it will be used by the method.

How to execute

  • To run the omnidirectional Libviso2, make sure you have a camera node publishing images over ROS.
  • After that, you can use the demo launch file that we provide in the following way:
roslaunch viso2_ros demo.launch
  • Make sure you first set the calibration file path correctly by changing the calib_path parameter.
  • You can also edit/add the default parameters that Libviso2 uses (don't forget that camera height and pitch are mandatory for scale calculation; see the sketch after this list).
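As a rough illustration (not the actual libviso2 code) of why those two parameters matter: monocular visual odometry recovers translation only up to an unknown scale, and the known metric camera height above the ground plane is what pins that scale down, while the pitch tells the odometer where to expect the ground plane relative to the optical axis.

// Illustration only -- not libviso2's actual implementation.
// If the odometer has estimated the ground plane at some unit-less distance
// below the camera, the known metric camera height fixes the missing scale.
double recoverScale(double known_camera_height_m,     // the 'camera_height' parameter
                    double estimated_plane_distance)  // the same quantity, known only up to scale
{
  return known_camera_height_m / estimated_plane_distance;
}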

Performance test

  • If you want to check the performance of the method before using it in your own system, you can use the bag file that we provide here.
Available topics on the bag file
  1. perspective_cam/image/compressed contains a perspective camera sequence that can be used to benchmark the default monocular Libviso2 version.
  2. fisheye_cam/image/compressed contains the same sequence captured by a fisheye camera.
  3. slam_out_pose contains the camera motion estimate computed by the Hector SLAM method.
  4. /husky_velocity_controller/odom contains conventional odometry motion estimation.
  5. /fix contains GPS data.
  6. /map contains a map of the environment built using a laser scanner.
  7. (...) and others.
Running the test sequences
  • To run both test sequences, you can execute the following commands after setting the proper paths for the bag file and the calibration file.
roslaunch viso2_ros perspective.launch
roslaunch viso2_ros fisheye.launch

NOTE: We provide the camera calibration file for the fisheye camera ('data/calib_results.txt'). You have to set the correct path to the rosbag file in both launch files, and the path to the calibration text file in fisheye.launch. We also provide an rviz configuration file for easy visualization of the results.

Acknowledgements

This work is co-financed by the ERDF (European Regional Development Fund) through the Operational Programme for Competitiveness and Internationalisation (COMPETE 2020) under the PORTUGAL 2020 Partnership Agreement, and through the Portuguese National Innovation Agency (ANI) as part of the project «ROMOVI: POCI-01-0247-FEDER-017945».

viso2's People

Contributors

aaguiar96, bomiquel, eborghi10, limhyon, miquelmassot, papr, phil0stine, sksavant, stwirth, timonegk


viso2's Issues

Bag files not accessible

The bag file link does not work: ftp://opt.uib.es/bagfiles/viso2_ros/
The server cannot be reached. Could you please fix this issue? Thanks.

Yanming

Ideas about remappings

The way the stereo node subscribes to image topics is good for command-line usage, but it is not very convenient if one needs to launch it from a launch file inside a namespace.

All of the improvements described below are applicable to the FOVIS wrapper too.

First issue

Consider a scenario: I have a namespace /stereo with a bunch of relevant nodes inside, like stereo_image_proc or camera drivers. It is preferred to keep all stereo-related nodes inside the same namespace, so I'd like to add stereo_odometer here too.

This is what I get in this case:

$ ./stereo_odometer __ns:=/stereo

---
$ rosnode info /stereo/stereo_odometer 
# (not so interesting)
 * /stereo/stereo/right/stereo/image [unknown type]
 * /stereo/stereo/right/camera_info [unknown type]
 * /tf [tf/tfMessage]
 * /stereo/stereo/left/camera_info [unknown type]
 * /stereo/stereo/left/stereo/image [unknown type]

As you can see, the subscriptions were pushed down to the namespace /stereo/stereo, and this is not correct.

First suggestion

Rather, use the standard ROS API, subscribing to the topics [left|right]/image and [left|right]/camera_info.

If a user needs to run the node from the command line, it is always possible to set a namespace, either like this: ./stereo_odometer __ns:=/stereo, or like this: export ROS_NAMESPACE=/stereo && ./stereo_odometer.
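For concreteness, a minimal sketch of that subscription scheme with node-relative topic names (plain subscribers instead of image_transport, and a placeholder callback; this is not the actual viso2_ros code):

#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>

using sensor_msgs::Image;
using sensor_msgs::CameraInfo;

void stereoCallback(const Image::ConstPtr& l_img, const Image::ConstPtr& r_img,
                    const CameraInfo::ConstPtr& l_info, const CameraInfo::ConstPtr& r_info)
{
  // process one synchronized stereo tuple here
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_odometer");
  ros::NodeHandle nh;  // node-relative handle: "left/image" resolves to <ns>/left/image

  message_filters::Subscriber<Image>      l_img_sub (nh, "left/image",        5);
  message_filters::Subscriber<Image>      r_img_sub (nh, "right/image",       5);
  message_filters::Subscriber<CameraInfo> l_info_sub(nh, "left/camera_info",  5);
  message_filters::Subscriber<CameraInfo> r_info_sub(nh, "right/camera_info", 5);

  message_filters::TimeSynchronizer<Image, Image, CameraInfo, CameraInfo>
      sync(l_img_sub, r_img_sub, l_info_sub, r_info_sub, 5);
  sync.registerCallback(boost::bind(&stereoCallback, _1, _2, _3, _4));

  ros::spin();
  return 0;
}

With this layout, ./stereo_odometer __ns:=/stereo ends up subscribed to /stereo/left/image and friends, without the extra stereo prefix.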

Second issue

In the scenario above it may be necessary to remap the image topics from image to something more useful, like image_rect. The most straightforward way does not work:

$ ./stereo_odometer __ns:=/stereo image:=image_rect

---
$ rosnode info /stereo/stereo_odometer 
# (not so interesting)
 * /stereo/stereo/right/stereo/image_rect [unknown type]
 * /stereo/stereo/right/camera_info [unknown type]
 * /tf [tf/tfMessage]
 * /stereo/stereo/left/camera_info [unknown type]
 * /stereo/stereo/left/stereo/image_rect [unknown type]

The remapping from image is resolved relative to the current namespace, so we get stereo/image_rect instead of just image_rect.

Second suggestion

Use simple subscriptions to [left|right]/image_rect and [left|right]/camera_info. This is not very convenient for command-line usage, but it works with namespaces. Besides, VISO2 only needs image_rect anyway, and this name is more or less standardized.

See features

Hi,
Is there any way to see the features found on the image? I am trying to debug my system; it seems that the camera focus might be the problem.
Thanks

Remove Andreas' email from manifest/package.xml

Andreas gets emails from Willow's build server when documentation building fails, and he doesn't want them. We should list only our addresses, as we are the authors and maintainers of this package.

Segmentation Fault

Hi, I am building a SLAM robot using RatSLAM and was thinking about integrating stereo viso2 odometry into it, but I am getting segmentation faults in the function divconqrecurse(mesh*, behavior*, float**, int, int, otri*, otri*); the gdb backtrace shows a dozen divconqrecurse calls. Can you check this out?

Log file:
log.txt

My setup is a Raspberry Pi Compute Module 3 running Ubuntu 16.04 and ROS Kinetic, with a stereo pair of v1.3 cameras, each capturing 640x480 images at 10 fps using the raspicam library. The images are sent to my laptop, running Ubuntu 16.04 and ROS Kinetic, over a LAN Wi-Fi connection.
The bgr8 images are first encoded to JPEG before transfer to reduce transfer time.

The Pi publishes

sensor_msgs/CompressedImage at
/agnostobot/left/image/compressed
/agnostobot/right/image/compressed

and

sensor_msgs/CameraInfo at
/agnostobot/left/camera_info
/agnostobot/right/camera_info

The images are then converted back to raw using image_transport
and published at

/rectify/left/image_raw
/rectify/right/image_raw

camera_info messages are remapped to

/rectify/left/camera_info
/rectify/right/camera_info

stereo_image_proc is used to rectify the images. Then an ExactTime policy is used to synchronize the rectified images and camera_info messages and republish them for the viso2_ros stereo_odometer. This was done because camera_info messages were delivered to viso2 earlier than the image messages, which were still being rectified.
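A minimal sketch of such an ExactTime re-sync node (the output topic names are the ones quoted below; the input topic names and the node name are placeholders, not taken from the actual setup):

#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/exact_time.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>

using sensor_msgs::Image;
using sensor_msgs::CameraInfo;
typedef message_filters::sync_policies::ExactTime<Image, Image, CameraInfo, CameraInfo>
    StereoExactPolicy;

ros::Publisher l_img_pub, r_img_pub, l_info_pub, r_info_pub;

// Republish all four messages only when a complete, identically stamped tuple
// has arrived, so the downstream stereo_odometer never sees a camera_info
// without its matching image.
void republish(const Image::ConstPtr& l_img, const Image::ConstPtr& r_img,
               const CameraInfo::ConstPtr& l_info, const CameraInfo::ConstPtr& r_info)
{
  l_img_pub.publish(l_img);
  r_img_pub.publish(r_img);
  l_info_pub.publish(l_info);
  r_info_pub.publish(r_info);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_resync");  // placeholder node name
  ros::NodeHandle nh;

  // Inputs: the rectified output of stereo_image_proc (topic names assumed).
  message_filters::Subscriber<Image>      l_img_sub (nh, "left/image_rect",   5);
  message_filters::Subscriber<Image>      r_img_sub (nh, "right/image_rect",  5);
  message_filters::Subscriber<CameraInfo> l_info_sub(nh, "left/camera_info",  5);
  message_filters::Subscriber<CameraInfo> r_info_sub(nh, "right/camera_info", 5);

  // Outputs: the topics listed below in this report.
  l_img_pub  = nh.advertise<Image>("/stereo/left/image_rect", 5);
  r_img_pub  = nh.advertise<Image>("/stereo/right/image_rect", 5);
  l_info_pub = nh.advertise<CameraInfo>("/stereo/left/camera_info", 5);
  r_info_pub = nh.advertise<CameraInfo>("/stereo/right/camera_info", 5);

  message_filters::Synchronizer<StereoExactPolicy>
      sync(StereoExactPolicy(5), l_img_sub, r_img_sub, l_info_sub, r_info_sub);
  sync.registerCallback(boost::bind(&republish, _1, _2, _3, _4));

  ros::spin();
  return 0;
}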

images are published at

/stereo/left/image_rect
/stereo/right/image_rect

and camera_info at

/stereo/left/camera_info
/stereo/right/camera_info

The left camera image and camera_info message header frame_id are set to

/agnostobot_left_camera

and the right camera image and camera_info message header frame_id are set to

/agnostobot_right_camera

static_transform_publisher with args="0.1075 0.0865 0.0215 0 0 0 /base_link /agnostobot_left_camera 100" is used to provide transform from /base_link -> /agnostobot_left_camera

static_transform_publisher with args="0.1075 -0.0865 0.0215 0 0 0 /base_link /agnostobot_right_camera 100" is used to provide transform from /base_link -> /agnostobot_right_camera

I'm also running a chrony server on the laptop, which the Pi uses for time synchronization, since that was also causing "future transform" errors.

Possible issues (but they could be unrelated):

  1. My tf coordinate frames are not optical frames; my x is forward, y is left and z is up, which is not what viso2 wants, but I'm new to tf and not sure how to reconcile the two conventions.

  2. Another issue is that I'm encoding and decoding before rectification; that shouldn't be a problem, right?

  3. My robot is not built yet; I was just moving my Raspberry Pi case, with the cameras mounted on it, around the room. I expect to get wrong or no odometry data, but not for the library to crash.

  4. What is this about? The parameter isn't mentioned in the docs:
    'stereo' has not been remapped! Example command-line usage:
    $ rosrun viso2_ros stereo_odometer stereo:=narrow_stereo image:=image_rect

If you need any more info, let me know. Also let me know if I'm making any mistakes here. Waiting for your response.
Thanks

Links:

pi camera image publisher
launch file
message synchronizer

Catkin build uses OpenCV 2.4.8 instead of 3

Hello, I'm compiling viso2 on a Raspberry Pi 3, but this error comes up every time, for every OpenCV library:
No rule to make target '/usr/lib/arm-linux-gnueabihf/libopencv_video.so.2.4.8', needed by '/home/pi/eureca/devel/.private/viso2_ros/lib/viso2_ros/mono_odometer'.
I specified OpenCV 3 in the CMake files, but it always complains about the missing file.
I also recompiled cv_bridge with OpenCV 3 just to be sure, but it won't work.
How can I use OpenCV 3?
Thanks

Tiny pre-processor bug

In stereo_processor.h, you have:

#ifndef STEREO_PROCESSOR_H_
#define STEREO_PRECESSOR_H_

where the #define has a typo and should be:

STEREO_PROCESSOR_H_

Cannot build viso2

I've tried to build viso2 and installed all the dependencies, but I am still getting the error 'Invoking "make -j4 -l4" failed' while building stereo_odometer. Before that there was the error ‘shared_ptr’ in namespace ‘std’ does not name a template type. Could you help me, please? Is it a problem with image_geometry or something else?

libviso2 ros

Stephan,
do you think it is possible to add the option replace=true as a ROS parameter that can be set at run time? Is it even possible to set parameters without changing them in the program? If yes, in which way?
Thanks
Giuseppe

Visual odometry lost and segmentation fault

This is the error that occurs while running viso2. The camera is calibrated using camera_calibration's cameracalibrator.py.
Configuration:
OS: Ubuntu 14.04
ROS distro: Indigo
Camera: Logitech c270
Camera Driver: uvc_camera

sidd@sidd-K55VM:~/catkin_ws$ rosrun viso2_ros stereo_odometer _approximate_sync:=True  stereo:=stereo image:=image_rect
[ INFO] [1459528291.959742716]: Subscribing to:
    * /stereo/left/image_rect
    * /stereo/right/image_rect
    * /stereo/left/camera_info
    * /stereo/right/camera_info
[ INFO] [1459528292.086546853]: Basic Odometer Settings:
  odom_frame_id      = /odom
  base_link_frame_id = /base_link
  publish_tf         = true
  invert_tf          = false
[ INFO] [1459528292.534461262]: Initialized libviso2 stereo odometry with the following parameters:
Calibration parameters:
  f  = 953.869
  cu = 679.641
  cv = 239.209
Matcher parameters:
  nms_n                  = 3
  nms_tau                = 50
  match_binsize          = 50
  match_radius           = 200
  match_disp_tolerance   = 2
  outlier_disp_tolerance = 5
  outlier_flow_tolerance = 5
  multi_stage            = 1
  half_resolution        = 1
  refinement             = 1
Bucketing parameters:
  max_features  = 2
  bucket_width  = 50
  bucket_height = 50
Stereo odometry parameters:
  base             = 0.386787
  ransac_iters     = 200
  inlier_threshold = 2
  reweighting      = 1
  queue_size = 5
  approximate_sync = 1
  ref_frame_change_method = 0
  ref_frame_motion_threshold = 5
  ref_frame_inlier_threshold = 150
[ WARN] [1459528292.586841816]: The tf from '/base_link' to 'camera' does not seem to be available, will assume it as identity!
[ WARN] [1459528292.722457090]: Visual Odometer got lost!
[ WARN] [1459528302.841250915]: Visual Odometer got lost!
[ WARN] [1459528312.914954366]: Visual Odometer got lost!
[ WARN] [1459528323.099015192]: Visual Odometer got lost!
[ WARN] [1459528333.212741587]: Visual Odometer got lost!
[ WARN] [1459528343.409453980]: Visual Odometer got lost!
[ WARN] [1459528353.778834532]: Visual Odometer got lost!
[ WARN] [1459528363.782220476]: Visual Odometer got lost!
[ WARN] [1459528373.993243490]: Visual Odometer got lost!
[ WARN] [1459528384.164382081]: Visual Odometer got lost!
Segmentation fault (core dumped)

We are publishing image_rect and the Odometer is subscribing to it and camera_info. We can't seem to figure out why we are getting a segfault.

Synchronization Problem

Hi,

I've been trying to run your code with command,

rosrun viso2_ros stereo_odometer stereo:=vizzy image:=image_rect_color _approximimate_sync:=True

but I'm getting the following warning:

[ WARN] [1426508496.056623314, 225.349000000]: [stereo_processor] Low number of synchronized left/right/left_info/right_info tuples received.
Left images received: 16 (topic '/vizzy/left/image_rect_color')
Right images received: 15 (topic '/vizzy/right/image_rect_color')
Left camera info received: 16 (topic '/vizzy/left/camera_info')
Right camera info received: 16 (topic '/vizzy/right/camera_info')
Synchronized tuples: 0
Possible issues:
* stereo_image_proc is not running.
Does rosnode info /stereo_odometer show any connections?
* The cameras are not synchronized.
Try restarting the node with parameter _approximate_sync:=True
* The network is too slow. One or more images are dropped from each tuple.
Try restarting the node, increasing parameter 'queue_size' (currently 5)

I don't know how to synchronize the camera topics; I tried several code samples that I found on ROS Answers, but I wasn't successful. The link to the code:

http://answers.ros.org/question/9705/synchronizer-and-image_transportsubscriber/

Can you help me, friend? Thanks so much.

mono vo gets poor velocity estimate

Hi, I have got the mono VO working on my Odroid,
but the result seems poor, especially when the vehicle is turning.

I have done the camera calibration as instructed.
I also set the camera pitch, the camera height, and the tf from base_link to camera.
I didn't set any other parameters.
I put a lot of pictures on the ground to add more features.

The output rate of the VO is only 1.8 Hz. The images fed into the mono VO arrive at 30 Hz.

I wonder what else I can do to improve the performance of the mono VO, e.g. how to tune the parameters?

And is the poor performance due to the low frequency?

viso2 with elas_ros

Hello, I am sorry to disturb you, but I cannot launch viso2 with elas_ros successfully. I need your help.
The output is as follows:
[ WARN] [1536650664.048094195, 1428920107.712918255]: The tf from '/base_link' to 'stereo_forward_optical' does not seem to be available, will assume it as identity!
terminate called after throwing an instance of 'tf2::LookupException'
  what(): "stereo_forward_optical" passed to lookupTransform argument source_frame does not exist.
[pc_construction-5] process has died [pid 20887, exit code -6, cmd /home/ppg/elas_ws/devel/lib/elas_ros/pc_construction frame_data:=/elas_ros/frame_data pose:=/stereo_odometer/pose base_frame_id:=/odom pose_frame_id:=/base_link __name:=pc_construction __log:=/home/ppg/.ros/log/df920058-b590-11e8-9d06-e0d55e489afd/pc_construction-5.log]. log file: /home/ppg/.ros/log/df920058-b590-11e8-9d06-e0d55e489afd/pc_construction-5*.log
I didn't have the URDF, so I just commented it out.
I would be very grateful to you if you could answer me.
Best Wishes!

Changes on the package name

I am a little confused by the changes to the package names. What is the purpose of these changes? Are there any big changes coming in the near future?

I am sorry if this is not the right place for this kind of discussion.

Best regards
Yanming

Variance vs. standard deviation

Consider these lines.

Since variance has units that are the square of the units of the variable, the comment at line 22 is confusing. It should read something like "0.1 m^2 linear variance, 10 deg^2 angular variance", or every member of the matrices below should be squared.

I suppose the second interpretation is the correct one?
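For illustration, here is what the second interpretation means in practice (the numbers are examples only, not the values in the linked file):

// Illustration only: filling a diagonal 6x6 pose covariance from standard
// deviations. Covariance entries are variances, i.e. squared sigmas.
#include <cmath>

int main()
{
  const double linear_stddev  = 0.1;                  // 0.1 m
  const double angular_stddev = 10.0 * M_PI / 180.0;  // 10 deg, in rad

  double pose_covariance[36] = {0.0};
  for (int i = 0; i < 3; ++i)
    pose_covariance[i * 6 + i] = linear_stddev * linear_stddev;    // m^2
  for (int i = 3; i < 6; ++i)
    pose_covariance[i * 6 + i] = angular_stddev * angular_stddev;  // rad^2

  return 0;
}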

Running on ARM

Any chance that viso2 can run on an ARM board like an Odroid?
I can see that some functions rely heavily on SSE intrinsics; can this be untangled to produce the same output, or is it going to be too hard?

Building under ROS fuerte

Hi, I've had viso2_ros building and running fine under electric. Switched over to fuerte yesterday and I get the following error when I try to build:

[ 50%] Building CXX object CMakeFiles/mono_odometer.dir/src/mono_odometer.o
Linking CXX executable bin/mono_odometer
/usr/bin/ld: CMakeFiles/mono_odometer.dir/src/mono_odometer.o: undefined reference to symbol 'cv::fastFree(void*)'
/usr/bin/ld: note: 'cv::fastFree(void*)' is defined in DSO /opt/ros/fuerte/lib/libopencv_core.so so try adding it to the linker command line
/opt/ros/fuerte/lib/libopencv_core.so: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make[2]: *** [bin/mono_odometer] Error 1
make[1]: *** [CMakeFiles/mono_odometer.dir/all] Error 2

Is this something to do with the way fuerte now releases opencv in the /opt/ros/fuerte/lib directory, and should I be linking against this version instead of the external opencv that electric must have used?

Cheers,
Liz

viso2_ros cannot obtain the reconstruction

Hi,
I am using viso2_ros with ROS Indigo, and demo.launch seems to work.
However, when I use rviz with PointCloud2 (stereo_odometer/point_cloud), I get the transform error: For frame [stereo_forward_optical]: Frame [stereo_forward_optical] does not exist.

screenshot from 2018-01-04 11_30_08

Can you give me some advice to solve this problem? Thanks in advance.

Lot of Features located in edges.

Hello,

I'm using your code, but I am also using the rejected outliers. I found that there are some features located on edges. Since they are only matched points, I think this shouldn't be possible. It uses SIFT, right? If so, I think it should include an elimination of keypoints located on edges.

I am thinking that changing some threshold, maybe the one related to matches, would avoid this situation. Am I right?

I would appreciate your help. Thanks.

Pose change without camera movement

I'm running the following configuration.

Two Logitech C270 cameras separated by ~14 cm.
After camera calibration we get an epipolar error of approximately 0.70 pixels.
I'm getting a reasonable point cloud and am including photos of the configuration.

viso2issue

I am checking the pose using a node that writes a log file. Here's the code for it.

#!/usr/bin/env python
import os
from datetime import datetime

import rospy
from geometry_msgs.msg import PoseStamped

logs_path = '/home/niteesh/viso2logs'
ns = 'stereo_odometer'


def i_cb(data, fw):
    # Log the current position estimate to the console and to the log file.
    rospy.loginfo('Viso2 is tracking')
    position = data.pose.position
    x, y, z = position.x, position.y, position.z
    rospy.loginfo('x:%f, y:%f, z:%f' % (x, y, z))
    fw.write('x:%f, y:%f, z:%f\n' % (x, y, z))


def listener():
    rospy.init_node('pose_listener', anonymous=True)
    rospy.loginfo('Trying to subscribe')
    dt_str = str(datetime.now())
    logfile = os.path.join(logs_path,
                           dt_str[0:10] + '-' + dt_str[11:13] + '-' +
                           dt_str[14:16] + '-' + dt_str[17:19] + '.txt')
    fw = open(logfile, 'w')
    try:
        rospy.Subscriber(ns + '/pose', PoseStamped, i_cb, fw)
    except Exception as e:
        rospy.loginfo(e)

    rospy.loginfo('Not getting any information from topic')
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()


if __name__ == '__main__':
    listener()

Without any movement of the camera, the logger shows that the x coordinate increases with time. The odometer does not get lost either.

Stereo Odometer, free(): invalid pointer error ROS Kinetic

I'm running ROS Kinetic on an Odroid XU4, using two Flea3 GigE cameras as input. I followed the build procedure for viso2 and ran into the same -msse3 error that others were finding (and the other issues listed on that issue thread due to the ARM processor as well). Now I have built it successfully, but I am getting this error when I try to start up a stereo_odometer node.

Command: rosrun viso2_ros stereo_odometer image:=image_rect

This is the error output.

[ INFO] [1455216526.819296799]: Subscribing to:
* /stereo/left/image_rect
* /stereo/right/image_rect
* /stereo/left/camera_info
* /stereo/right/camera_info
[ INFO] [1455216527.258145677]: Basic Odometer Settings:
odom_frame_id = odom
base_link_frame_id = base_link
publish_tf = true
invert_tf = false
[ INFO] [1455216528.103928295]: Initialized libviso2 stereo odometry with the following parameters:
Calibration parameters:
f = 694.907
cu = 333.242
cv = 266.244
Matcher parameters:
nms_n = 3
nms_tau = 50
match_binsize = 50
match_radius = 200
match_disp_tolerance = 2
outlier_disp_tolerance = 5
outlier_flow_tolerance = 5
multi_stage = 1
half_resolution = 1
refinement = 1
Bucketing parameters:
max_features = 2
bucket_width = 50
bucket_height = 50
Stereo odometry parameters:
base = 0.101805
ransac_iters = 200
inlier_threshold = 2
reweighting = 1
queue_size = 5
approximate_sync = 1
ref_frame_change_method = 0
ref_frame_motion_threshold = 5
ref_frame_inlier_threshold = 150
*** Error in `/home/odroid/catkin_ws/devel/lib/viso2_ros/stereo_odometer': free(): invalid pointer: 0xafd8e020 ***
Aborted

One time I also got the following message at the end:

*** Error in `/home/odroid/catkin_ws/devel/lib/viso2_ros/stereo_odometer': corrupted size vs. prev_size: 0x00a3bc70 ***
Aborted

Does anyone have any ideas for how to fix this? Thanks in advance.

Can I use a camera with a highly distorted lens?

I have a camera with a highly distorted lens,
and its resolution is just VGA (640x480).
Can I use this camera for visual odometry, mono or stereo?

Paper related to Mono odometry.

Hi, I'm interested in learning more deeply about how Viso2 works (especially the mono odometry). I would appreciate it if anyone could give me a link or recommend some publications.

Thank you

rviz can't subscribe to the output message /stereo_odometer/point_cloud

In the source code, the PointCloud type is defined as pcl::PointCloud<pcl::PointXYZRGB> by
typedef pcl::PointCloud<pcl::PointXYZRGB> PointCloud; (line 68, stereo_odometer.cpp)
The compiler treats it as sensor_msgs::PointCloud2.
But there are some differences between these two types: sensor_msgs::PointCloud2 and pcl::PointCloud<pcl::PointXYZRGB>.
So rviz can't receive the message when I add the "PointCloud2" display and subscribe to "stereo_odometer/point_cloud" in rviz. And the return value of point_cloud_pub_.getNumSubscribers() (line 194 of stereo_odometer.cpp) is always 0.

I think the above is the reason why I can't show the point cloud in rviz. Am I right? Thanks a lot.

Best regards
Yanming

Point cloud topic (stereo version)

Is it possible to also publish a topic with the point cloud in the body-fixed (current) frame and, optionally, in the base frame?

Thanks for the great work!!!

Compile error - bullet port to tf

Hi,

I got the following compile error while compiling viso2_ros:

In file included from /home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:10:0:
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/odometer_base.h: In member function ‘void viso2_ros::OdometerBase::integrateAndPublish(const tf::Transform&, const ros::Time&)’:
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/odometer_base.h:153:9: error: ‘btScalar’ was not declared in this scope
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/odometer_base.h:153:18: error: expected ‘;’ before ‘angle’
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/odometer_base.h:155:44: error: ‘angle’ was not declared in this scope
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/odometer_base.h:155:44: note: suggested alternative:
/opt/ros/groovy/include/tf/LinearMath/Quaternion.h:406:1: note: ‘tf::angle’
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp: In member function ‘void viso2_ros::MonoOdometer::imageCallback(const ImageConstPtr&, const CameraInfoConstPtr&)’:
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:105:9: error: ‘btMatrix3x3’ was not declared in this scope
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:105:21: error: expected ‘;’ before ‘rot_mat’
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:109:9: error: ‘btVector3’ was not declared in this scope
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:109:19: error: expected ‘;’ before ‘t’
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:110:39: error: ‘rot_mat’ was not declared in this scope
/home/tienthanh/workspace/ros/groovy/mebios/nttputus/viso2/viso2_ros/src/mono_odometer.cpp:110:48: error: ‘t’ was not declared in this scope
make[2]: *** [CMakeFiles/mono_odometer.dir/src/mono_odometer.cpp.o] Error 1
make[1]: *** [CMakeFiles/mono_odometer.dir/all] Error 2
make: *** [all] Error 2

I use ROS Groovy on Ubuntu 12.04. This could come from the porting of LinearMath from Bullet to tf: http://www.ros.org/wiki/bullet/ChangeList.

fatal error: viso2_ros/VisoInfo.h: No such file or directory

Hi,
I am trying to build libviso2 with ROS Indigo in my catkin workspace. During the build, I keep getting the error:
/home/raman/catkin_ws/src/dlut_vision/dlut_viso2/dlut_viso2_ros/src/stereo_odometer.cpp:10:32: fatal error: viso2_ros/VisoInfo.h: No such file or directory
#include <viso2_ros/VisoInfo.h>

Indeed, when I checked the location, there is no VisoInfo.h file. Why is it then included in stereo_odometer.cpp and also in mono_odometer.cpp?

Camera frames confusion

Hi,

I am completely confused about the static transformations required to make Viso2 work with my setup; any help will be greatly appreciated.

The wiki says: "The coordinate frame of the camera is expected to be the optical frame, which means x is pointing right, y downwards and z from the camera into the scene."

I have a ground robot (robotino), the base_link X axis points forward, the base_link Y axis left and the base_link Z axis points upwards and I have a stereo camera installed on the robot looking exactly to the front.

I configured a static transformation between my camera frame (stereo_cam) and the base_link of my robot using only a rotation, so that the Z axis of the camera frame points forward, the X axis right and the Y axis downwards. I also enabled, in the Viso2 parameters, the publishing of the /odom to base_link transformation.
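For reference, a minimal sketch of that kind of base_link-to-optical static transform (the frame name stereo_cam is taken from this post; the node name and the zero translation are placeholders, not the poster's actual setup):

// Sketch: setRPY(-pi/2, 0, -pi/2) maps base_link's x-forward / y-left / z-up
// axes onto the optical convention x-right / y-down / z-forward for a camera
// looking straight ahead.
#include <cmath>
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_cam_tf");  // placeholder node name
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;

  tf::Transform t;
  t.setOrigin(tf::Vector3(0.0, 0.0, 0.0));   // put the real camera offset here
  tf::Quaternion q;
  q.setRPY(-M_PI / 2.0, 0.0, -M_PI / 2.0);   // roll, pitch, yaw
  t.setRotation(q);

  ros::Rate rate(100);
  while (ros::ok())
  {
    br.sendTransform(tf::StampedTransform(t, ros::Time::now(),
                                          "base_link", "stereo_cam"));
    rate.sleep();
  }
  return 0;
}

This should be roughly equivalent to the classic one-liner rosrun tf static_transform_publisher 0 0 0 -1.5707963 0 -1.5707963 base_link stereo_cam 100 (argument order x y z yaw pitch roll).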

However, with this configuration the base_link doesn't move in the x-y plane of the odom frame as I expected, but instead floats around.

Now, watching the YouTube video on the viso2_ros wiki, I noticed that the X axis (the red one) of the /stereo_dow frame is pointing down... shouldn't it be oriented with the Z axis (the blue one) looking down instead?

Thanks in advance.

Works great with recorded images, but not with live...

Hi, I have a RealSense R200 camera and am using it with stereo odometry.

The images are converted to cv::mat like this:

Mat lFrame(480, 640, CV_16U, (uchar *)dev->get_frame_data(rs::stream::infrared));

which looks fine, and is from the docs.

When I save images out using:

cv::imwrite(name, img);

and run them through libviso2 by loading the images, like this:

cv::Mat left_img = cv::imread(left_img_file_name.c_str(), 0); // load as black and white image
uint8_t* left_img_data = left_img.data;

Everything works great. BUT, when I use a live stream of images, like this:

Mat lFrame(480, 640, CV_16U, (uchar *)dev->get_frame_data(rs::stream::infrared));
uint8_t* left_img_data = lFrame.data;

It never succeeds at:

if (viso.process(left_img_data,right_img_data,dims)) {

and jumps straight to:

} else {
        cout << " ... failed!" << endl;
      }

I suspect this is due to a mismatch in Mat types, but I am lost.

I have tried:


cv::cvtColor(lFrame, leftImg, CV_GRAY2RGB);

cv::cvtColor(lFrame, leftImg, CV_GRAY2BGR);

But I see no difference. Does anyone have any thoughts on what might be missing here?
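One thing worth ruling out, sketched below under the assumption that the infrared stream really is 16-bit: cv::imread(..., 0) hands libviso2 an 8-bit single-channel buffer, while the .data pointer of a CV_16U Mat is a 16-bit buffer, so process() sees a completely different pixel layout on the live path. Converting the live frame down to 8 bits first would make both paths match (the /256 scale is an assumption about the sensor's useful range):

// Sketch only: assumes lFrame is the CV_16U infrared Mat from the snippet above.
#include <opencv2/core/core.hpp>

cv::Mat toViso8Bit(const cv::Mat& frame16)
{
  cv::Mat frame8;
  frame16.convertTo(frame8, CV_8U, 1.0 / 256.0);  // 16-bit -> 8-bit grayscale
  return frame8;
}

// Usage, mirroring the file-based path:
//   cv::Mat left8 = toViso8Bit(lFrame);
//   uint8_t* left_img_data = left8.data;  // same layout as imread(file, 0) gives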

running very slow...

Hi, I have libviso2 running on Windows, and it works, BUT it is very, very slow: around 3 fps.

I know you just built the ROS port, but do you have any thoughts on why this would be? Is it built to run on the GPU or CPU?

Thanks!

E-mail incorrect?

I've sent an e-mail to your account, but an invalid-user error was returned,
so I am posting the mail here.

Hi.
I've seen your libviso2 ROS package on ros.org.
I really would like to run it on my computer; before doing so, I
have the following questions.

  • Compilation instructions for this package: Could you provide brief
    compilation instructions for this ROS package?
  • KITTI dataset: Have you worked with the KITTI dataset using this ROS
    package? Do you know how to feed existing image sequences to this ROS
    package?
  • Launch file: Can you provide an example launch file for a real
    stereo system?

-Hyon

Memory performance

Hi guys!
I've been using libviso2 for a while. Recently I found that the ROS wrapper consumes a lot of memory: memory usage increased continuously while I ran viso2_ros. But to my knowledge, a visual odometer should not behave like this. Is it storing something else? Has anybody else met this problem?

Problem solved. Do not use OpenCV 3.

Better behavior when the base to sensor transform is not available

Hi everyone,

The current odometer implementation may break the estimated odometry if the transform from the base frame to the camera frame becomes unavailable for a short period of time.

The problem is here. I suggest fixing it by using the last available transform when the transform for the requested timestamp is not available.

My suggestion is based on the assumption that most robotic applications use cameras with static, non-identity base-to-camera transforms; therefore, in case of a TF failure it is better to assume that the transform has not changed.

Suggested patch:

@@ -123,12 +123,14 @@ protected:
     }
     else
     {
-      ROS_WARN_THROTTLE(10.0, "The tf from '%s' to '%s' does not seem to be available, "
-                              "will assume it as identity!", 
+      ROS_WARN_THROTTLE(10.0, "The tf from '%s' to '%s' does not seem to be available", 
                               base_link_frame_id_.c_str(),
                               sensor_frame_id_.c_str());
       ROS_DEBUG("Transform error: %s", error_msg.c_str());
-      base_to_sensor.setIdentity();
+      tf_listener_.lookupTransform(
+          base_link_frame_id_,
+          sensor_frame_id_,
+          ros::Time(0), base_to_sensor);
     }
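For what it's worth, tf::TransformListener::lookupTransform throws when no transform has ever been received, so a guarded variant of the same fallback might look like the sketch below (it reuses the member names from the patch above and is not a tested change):

// Sketch of the guarded fallback: try the latest available transform and only
// fall back to identity when no base-to-sensor transform was ever seen.
try
{
  tf_listener_.lookupTransform(base_link_frame_id_, sensor_frame_id_,
                               ros::Time(0), base_to_sensor);  // latest available
}
catch (const tf::TransformException& ex)
{
  ROS_DEBUG("Fallback transform error: %s", ex.what());
  base_to_sensor.setIdentity();
}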

Odometry translation problem using viso2_ros with zed_ros_wrapper

Hi guys,
I am working on a project using a ZED stereo camera with viso2_ros to be able to visualize and use the odometry. I use viso2_ros for the odometry, and run it with the zed_wrapper_node. When I visualize the odometry message (with rviz), the rotation of the camera is detected (though a bit slow), but no translation is detected/shown...
I thought the problem might come from the fact that two odometries are interfering (the ZED one and the viso2 one), but the problem remains when I disable the ZED odometry (in the zed_wrapper_node)! Plus, I tried using both the new and the old version of the node!

Has anyone had an odometry translation problem before?
Maybe I'm not disabling the ZED odometry correctly?

Here's my launch file, in case it helps:
zed_stereo_odom (copy).txt

Odometry lost with the demo.launch on Odroid XU3

Hi!

I am using two PS3 Eye cameras with an Odroid XU3, and I am trying to set up a stereo SLAM system.
Ros: Jade
OS:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

I have the cameras calibrated and am trying to use this launch file with the .bag file:
demo.txt

The nodes are starting alright, but then there is a warning:

[ WARN] [1462709099.361691273, 1428920106.055393073]: The tf from '/stereo_forward' to 'stereo_forward_optical' does not seem to be available, will assume it as identity!
[ WARN] [1462709101.083500454, 1428920107.774632973]: Visual Odometer got lost!
[ WARN] [1462709109.506094252, 1428920116.197964517]: The tf from '/stereo_forward' to 'stereo_forward_optical' does not seem to be available, will assume it as identity!
[ WARN] [1462709109.514676635, 1428920116.208092478]: [stereo_processor] Low number of synchronized left/right/left_info/right_info tuples received.
Left images received:       51 (topic '/stereo_forward/left/image_rect')
Right images received:      20 (topic '/stereo_forward/right/image_rect')
Left camera info received:  72 (topic '/stereo_forward/left/camera_info')
Right camera info received: 72 (topic '/stereo_forward/right/camera_info')
Synchronized tuples: 12

The messages I am expecting (info with rostopic hz):

odroid@odroid:~/catkin_ws/src/viso2/viso2_ros/launch$ rostopic hz stereo_forward/left/image_rect
WARNING: topic [/stereo_forward/left/image_rect] does not appear to be published yet
subscribed to [/stereo_forward/left/image_rect]
average rate: 13.414
    min: 0.020s max: 0.149s std dev: 0.04321s window: 8
average rate: 11.263
    min: 0.020s max: 0.149s std dev: 0.03633s window: 18
average rate: 10.538
    min: 0.020s max: 0.153s std dev: 0.03613s window: 28
average rate: 10.519
    min: 0.020s max: 0.153s std dev: 0.03145s window: 38
average rate: 10.414
    min: 0.020s max: 0.153s std dev: 0.02887s window: 49
average rate: 10.346
    min: 0.020s max: 0.153s std dev: 0.02787s window: 59
^Caverage rate: 10.292
    min: 0.020s max: 0.153s std dev: 0.02768s window: 61
odroid@odroid:~/catkin_ws/src/viso2/viso2_ros/launch$ rostopic hz stereo_forward/right/image_rect
subscribed to [/stereo_forward/right/image_rect]
average rate: 10.202
    min: 0.077s max: 0.128s std dev: 0.01657s window: 10
average rate: 10.032
    min: 0.077s max: 0.128s std dev: 0.01253s window: 19
average rate: 10.013
    min: 0.077s max: 0.128s std dev: 0.01253s window: 29
average rate: 9.989
    min: 0.077s max: 0.128s std dev: 0.01195s window: 39
average rate: 10.033
    min: 0.077s max: 0.128s std dev: 0.01097s window: 50
average rate: 9.989
    min: 0.077s max: 0.128s std dev: 0.01049s window: 59
^Caverage rate: 9.990
    min: 0.077s max: 0.128s std dev: 0.01035s window: 63
odroid@odroid:~/catkin_ws/src/viso2/viso2_ros/launch$ rostopic hz stereo_forward/disparity       
subscribed to [/stereo_forward/disparity]
average rate: 1.407
    min: 0.711s max: 0.711s std dev: 0.00000s window: 2
average rate: 1.481
    min: 0.640s max: 0.711s std dev: 0.03543s window: 3
average rate: 2.126
    min: 0.061s max: 0.711s std dev: 0.29106s window: 4
average rate: 1.371
    min: 0.061s max: 1.561s std dev: 0.47952s window: 6
average rate: 1.384
    min: 0.061s max: 1.561s std dev: 0.43804s window: 7
average rate: 1.401
    min: 0.061s max: 1.561s std dev: 0.38032s window: 9
^Caverage rate: 1.414
    min: 0.061s max: 1.561s std dev: 0.35908s window: 10

The odometer info:

rostopic echo /stereo_odometer/info
WARNING: no messages received and simulated time is active.
Is /clock being published?
header: 
  seq: 0
  stamp: 
    secs: 1428920105
    nsecs: 533951044
  frame_id: ''
got_lost: False
change_reference_frame: True
motion_estimate_valid: False
num_matches: 33
num_inliers: 8
runtime: 0.749203422

---
header: 
  seq: 1
  stamp: 
    secs: 1428920106
    nsecs: 333866835
  frame_id: ''
got_lost: True
change_reference_frame: True
motion_estimate_valid: False
num_matches: 8
num_inliers: 3
runtime: 1.021833571

---

Can someone help me with this? Is this a problem with the computational power of the Odroid XU3? Or should the odometer work with the provided data?
Thanks!

Catkinize and release for groovy

The package should be catkinized and released for groovy.

  1. We should rename the branches:

    master->fuerte
    develop->fuerte-devel

  2. Branch off new branches groovy and groovy-devel

  3. Catkinize groovy-devel branch and merge to groovy when finished

  4. Point default branch (HEAD) to groovy

  5. Update rosdistro files for fuerte and groovy.

Problem running viso2_ros

Hello,

When running stereo_odometer, a second robot appears in rviz, blinking and with a different position and orientation. That robot also appears in the robot camera image, and a second LaserScan and CostMap appear as well. I do not understand why; if you could help me I would be very grateful.

Feature Request - ROI suppression

Greetings,

I have an application where one robot observes another robot directly. I would like to mix in stereo odometry, but with suppression of features in the general region of the observed robot. I can dig through your code and try to hack it in myself, but I figured I would at least ask whether this is in your pipeline for future releases.

Thank you,
Benjamin

Testing launch file?

Hi.
I think you are currently running this ROS package.
If that is the case, there must be a launch file for it.
Could you share your launch file?

unrecognized command line option '-msse3'

Hi, I am trying to use this package on my Odroid.

ROS version: Indigo
OS: Ubuntu 14.04

I have downloaded the package into my own workspace.

When I run catkin_make, there are some errors, shown below:

Could you help me fix this problem? Thanks

Scanning dependencies of target viso2
[ 0%] [ 0%] [ 0%] [ 0%] [ 0%] Built target topic_tools_generate_messages_cpp
[ 0%] Built target std_msgs_generate_messages_cpp
Built target sensor_msgs_generate_messages_lisp
Built target nav_msgs_generate_messages_py
Built target roscpp_generate_messages_cpp
Built target sensor_msgs_generate_messages_py
[ 7%] [ 14%] [ 21%] Building CXX object viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/filter.cpp.o
[ 21%] [ 28%] Building CXX object viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/matcher.cpp.o
[ 28%] Built target _viso2_ros_generate_messages_check_deps_VisoInfo
Building CXX object viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/matrix.cpp.o
Building CXX object viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/reconstruction.cpp.o
g++-4.8.real: error: unrecognized command line option '-msse3'
Built target roscpp_generate_messages_py
make[2]: *** [viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/filter.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
g++-4.8.real: error: unrecognized command line option '-msse3'
make[2]: *** [viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/matcher.cpp.o] Error 1
[ 28%] g++-4.8.real: error: unrecognized command line option '-msse3'
make[2]: *** [viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/matrix.cpp.o] Error 1
Built target roscpp_generate_messages_lisp
g++-4.8.real: error: unrecognized command line option '-msse3'
make[2]: *** [viso2/libviso2/CMakeFiles/viso2.dir/libviso2/src/reconstruction.cpp.o] Error 1
make[1]: *** [viso2/libviso2/CMakeFiles/viso2.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 28%] Built target nav_msgs_generate_messages_lisp
[ 28%] Built target actionlib_msgs_generate_messages_cpp
[ 28%] [ 28%] Built target tf_generate_messages_py
[ 28%] [ 28%] Built target actionlib_msgs_generate_messages_lisp
Built target rosgraph_msgs_generate_messages_py
Built target rosgraph_msgs_generate_messages_cpp
[ 28%] Built target nodelet_generate_messages_lisp
make: *** [all] Error 2
Invoking "make -j8 -l8" failed

Distance between two cameras for stereo setup

I've been hunting through the parameters and cannot seem to find how far apart I should place my two cameras for a stereo setup. Can this parameter be changed? If not, what value is used?
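For reference, viso2_ros derives the baseline from the calibrated camera_info pair rather than from a hand-set parameter: in the ROS convention the right camera's projection matrix encodes Tx = -fx * baseline. A sketch of that relation (the function name is just for illustration):

// Sketch: recovering the stereo baseline from a calibrated right CameraInfo.
// P is the 3x4 projection matrix in row-major order; for the right camera,
// P[3] = Tx = -fx * baseline, so baseline = -P[3] / P[0].
#include <sensor_msgs/CameraInfo.h>

double baselineFromRightInfo(const sensor_msgs::CameraInfo& right_info)
{
  return -right_info.P[3] / right_info.P[0];  // in metres if the calibration is metric
}

The physical separation itself is your choice; whatever it is, it has to be captured by the stereo calibration.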

Thanks!

Using viso2 for monocular odometery

Hello,

I am trying to run the viso2 monocular odometer and I get the following error:

[ WARN] [1435082098.253781281]: mono_odometer needs rectified input images. The used image topic is 'image'. Are you sure the images are rectified?
[ INFO] [1435082098.264686737]: Basic Odometer Settings:
odom_frame_id = /odom
base_link_frame_id = /base_link
publish_tf = true
[ WARN] [1435082098.271895900]: Parameter 'camera_height' is required but not set. Using default: 1.000000
[ WARN] [1435082098.272378074]: Paramter 'camera_pitch' is required but not set. Using default: 0.000000
[ INFO] [1435082098.491839579]: Initialized libviso2 mono odometry with the following parameters:
Calibration parameters:
f = 0
cu = 0
cv = 0
Matcher parameters:
nms_n = 3
nms_tau = 50
match_binsize = 50
match_radius = 200
match_disp_tolerance = 2
outlier_disp_tolerance = 5
outlier_flow_tolerance = 5
multi_stage = 1
half_resolution = 1
refinement = 1
Bucketing parameters:
max_features = 2
bucket_width = 50
bucket_height = 50
Mono odometry parameters:
camera_height = 1
camera_pitch = 0
ransac_iters = 2000
inlier_threshold = 1e-05
motion_threshold = 100

[ WARN] [1435082098.510712066]: The tf from '/base_link' to 'usb_cam' does not seem to be available, will assume it as identity!
ERROR: Cannot get submatrix [3..3] x [0..-1] of a (0x0) matrix.

Terminal 3: roslaunch usb_cam.launch
The usb_cam node is publishing the images on the /camera/image_raw topic.

My camera calibration parameters are all 0. Is that the problem?

Can somebody help me with this?
Thanks

package interferes with pointgrey cameras

I've spent the last week trying to figure out why my Point Grey cameras stopped publishing their camera_info topics. It happens immediately after installing this package.

I'll be happy to supply config info, but the short story is this:
ubuntu 14.04.5
ros indigo
pointgrey-camera-driver (2x chameleon cm3-u3-13s2c)

The whole pipeline that I follow to install (sanitized for paths and licenses):
pipeline.txt

motion_estimate_valid

Why is the info message's motion_estimate_valid field never modified while the program is running?
The same problem applies to header.frame_id.

VisoInfo info_msg;
info_msg.header.stamp = image_msg->header.stamp;
info_msg.got_lost = !success;
info_msg.change_reference_frame = false;
info_msg.num_matches = visual_odometer_->getNumberOfMatches();
info_msg.num_inliers = visual_odometer_->getNumberOfInliers();
ros::WallDuration time_elapsed = ros::WallTime::now() - start_time;
info_msg.runtime = time_elapsed.toSec();
info_pub_.publish(info_msg);
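If you need those fields filled in locally, one possible sketch, reusing the names from the snippet above (this is not the project's actual fix, and the right semantics for motion_estimate_valid may differ):

// Sketch only: populate the two fields that the snippet above leaves untouched.
info_msg.header.frame_id = image_msg->header.frame_id;  // reuse the sensor frame of the input image
info_msg.motion_estimate_valid = success;               // one possible choice: mirror the flag used for got_lost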

thanks
