
Comments (13)

ZacharyTaylor commented on May 30, 2024

You can start an additional node that will republish compressed images in an uncompressed format:
<node name="republish" type="republish" pkg="image_transport" output="screen" args="compressed in:=/axis/image_raw/compressed raw out:=/axis/image_raw" />

Note, however, that any artifacts introduced by the compression will still be present, and they will degrade the quality of the tracking and localization.

from maplab.

schneith commented on May 30, 2024
I was wondering what else I can debug to ensure that ROVIOLI can receive the imu data and image_raw data from playing the rosbag with the roslaunch file and live script running.

If you enable verbose level >=3 (by adding the argument -v=3) you should see the output of the following lines:
https://github.com/ethz-asl/maplab/blob/master/applications/rovioli/src/datasource-rosbag.cc#L163
https://github.com/ethz-asl/maplab/blob/master/applications/rovioli/src/datasource-rosbag.cc#L178

If you see those outputs then the sensor data is fed properly to the system and the issue must lie somewhere else.


mfehr commented on May 30, 2024

Hi @kashmoneygt

Thank you for the report. There are several crucial steps missing before you can run ROVIOLI on your dataset:

  • Using the euroc calibration on a different dataset/sensor won't work. You will need to modify the camera and IMU calibration based on the information you get from the dataset. If you have access to the sensor or a calibration dataset, you should use kalibr to generate the calibration files.
  • You need to set the ROS topics in the camera and imu calibration files:
    • Modify this line in ncamera-euroc.yaml to set the camera topic:
      - camera:
            label: cam0
      
      You might need to modify this flag to get the right ROS topic suffix; the default is <topic>/image_raw:
      --vio_camera_topic_suffix
      
    • Adapt this line in the imu-.yaml file to set the imu topic:
      hardware_id: imu0
      
  • The dataset you are using has a pan-tilt camera. If the IMU is not mounted on the moving camera, the extrinsic calibration between camera and IMU is not constant. ROVIO can estimate this calibration online, i.e. it can start from a bad initial calibration, but I doubt it will be able to keep up if the extrinsics change rapidly. You would need to extend ROVIO(LI) to subscribe to the pan/tilt information you receive from the robot and adapt the calibration accordingly.
  • Use the live script, so you can see if something is happening or not.
  • You might not be able to pass color images directly to ROVIOLI; you might need to convert them to grayscale first. I'm not sure; to the best of my knowledge we haven't tested this.
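The topic wiring described above can be sketched as follows. This is an illustrative Python model of how the full camera topic is assembled from the yaml label plus the suffix flag; the function name and logic are assumptions for illustration, not maplab's actual code:

```python
# Illustrative sketch (not maplab code): the full camera topic is the
# label from the ncamera yaml plus the --vio_camera_topic_suffix value,
# which defaults to "/image_raw" per the discussion above.
def camera_topic(label: str, suffix: str = "/image_raw") -> str:
    """Return the full ROS topic the camera frontend would subscribe to."""
    return label + suffix

print(camera_topic("cam0"))                        # cam0/image_raw
print(camera_topic("/axis/image_raw", suffix=""))  # /axis/image_raw
```

So if your images are published directly on a topic like /axis/image_raw, putting the full topic in the label and setting the suffix flag to an empty string is one consistent combination.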


ZacharyTaylor commented on May 30, 2024

Also, just adding to this: I noticed your IMU is only running at 20 Hz. It's possible this will work, but I have never seen anyone run rovio with an IMU rate below 100 Hz.
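As a quick sanity check, the IMU rate follows directly from the message count and duration reported for the bag; with the numbers that appear in the kalibr log later in this thread (878 messages over 44.3 s) it comes out just under 20 Hz:

```python
# Estimate the IMU rate from bag statistics (message count and duration
# taken from the kalibr log later in this thread).
n_messages = 878
duration_s = 44.3
rate_hz = n_messages / duration_s
print(f"IMU rate: {rate_hz:.1f} Hz")  # IMU rate: 19.8 Hz
```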


kashmoneygt commented on May 30, 2024

Thanks for all that useful information.
I made the following changes accordingly.

The images are on the rostopic /axis/image_raw; however, they are compressed, as you can see in the picture of all rostopics above.
Is there any sort of parameter I can pass into ROVIOLI for the transport type to be compressed? I do not have access to the original image_raw topic.

If not, then may I get some help on how to input image frames into ROVIOLI rather than rosbags?

Thank you once again.


mfehr commented on May 30, 2024

Unfortunately there is no parameter in ROVIOLI to account for compressed images. You will have to either do what @ZacharyTaylor suggested, convert your ROS data prior to feeding it to ROVIOLI (e.g. have a look here), or change the interfaces (DataSourceRosbag or DataSourceRosTopic).

If you would like to add a new way to input data to ROVIOLI, you should have a look at its datasource system. It consists of a parent class (link) and two descendants for inputting live ROS data (link) or reading rosbags (link). You could add a third option that parses data in a specific format from the file system. You will then need to extend the DataSourceType and the factory function so that your own datasource is created when the gflag is set accordingly (--datasource_type, defined here). The DataSourceFlow creates one of the DataSource classes (rostopic, rosbag, or e.g. your version) based on the gflag and then starts streaming the data.
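The datasource structure described above can be sketched roughly as follows. The real implementation is C++ inside ROVIOLI; only the class names DataSource, DataSourceRosTopic, and DataSourceRosbag follow the thread, and everything else (the filesystem variant, method names, return values) is made up for illustration:

```python
# Illustrative Python sketch of the datasource pattern described above:
# a DataSource base class, concrete rostopic/rosbag readers, and a
# factory keyed by the --datasource_type flag value.
from abc import ABC, abstractmethod


class DataSource(ABC):
    @abstractmethod
    def start_streaming(self) -> str:
        ...


class DataSourceRosTopic(DataSource):
    def start_streaming(self) -> str:
        return "streaming live ROS topics"


class DataSourceRosbag(DataSource):
    def start_streaming(self) -> str:
        return "reading messages from a rosbag"


class DataSourceFilesystem(DataSource):
    """The hypothetical third option: parse image/imu files from disk."""

    def start_streaming(self) -> str:
        return "parsing files from the file system"


_FACTORY = {
    "rostopic": DataSourceRosTopic,
    "rosbag": DataSourceRosbag,
    "filesystem": DataSourceFilesystem,  # your new --datasource_type value
}


def create_datasource(datasource_type: str) -> DataSource:
    """Mimics the factory: flag value in, concrete datasource out."""
    try:
        return _FACTORY[datasource_type]()
    except KeyError:
        raise ValueError(f"Unknown datasource type: {datasource_type}")
```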


kashmoneygt commented on May 30, 2024

I appreciate the help in getting started from both of you.

I created an uncompress_img.launch launch file:
<launch>
  <node name="republish" type="republish" pkg="image_transport" output="screen" args="compressed in:=/axis/image_raw _image_transport:=compressed raw out:=image_raw" />
</launch>
When I created the launch file using what @ZacharyTaylor specified, I got the following warning, so I updated the .launch file as above.
[ WARN] [1519133772.177988434]: [image_transport] It looks like you are trying to subscribe directly to a transport-specific image topic '/axis/image_raw/compressed', in which case you will likely get a connection error. Try subscribing to the base topic '/axis/image_raw' instead with parameter ~image_transport set to 'compressed' (on the command line, _image_transport:=compressed). See http://ros.org/wiki/image_transport for details.

I created a topic_to_map script and placed it alongside the other scripts:

```bash
#!/usr/bin/env bash
LOCALIZATION_MAP_OUTPUT=$1
NCAMERA_CALIBRATION="$ROVIO_CONFIG_DIR/camera_calibration.yaml"
IMU_PARAMETERS_MAPLAB="$ROVIO_CONFIG_DIR/imu-adis16488.yaml"
IMU_PARAMETERS_ROVIO="$ROVIO_CONFIG_DIR/imu-sigmas-rovio.yaml"
REST=$@

rosrun rovioli rovioli \
  --alsologtostderr=1 \
  --v=2 \
  --ncamera_calibration=$NCAMERA_CALIBRATION \
  --imu_parameters_maplab=$IMU_PARAMETERS_MAPLAB \
  --imu_parameters_rovio=$IMU_PARAMETERS_ROVIO \
  --datasource_type="rostopic" \
  --save_map_folder="$LOCALIZATION_MAP_OUTPUT" \
  --map_builder_save_image_as_resources=false \
  --optimize_map_to_localization_map=false $REST
```

I did not specify a --vio_camera_topic_suffix="" flag because I am republishing to the default topic (image_raw).

In terminal 1, I ran roslaunch ~/uncompress_img.launch which ran roscore and the image_transport pkg successfully.
[screenshot]
In terminal 2, I ran rosrun rovioli topic_to_map ~/bags/maps/.

In terminal 3, I played the rosbag.

However, rovioli reports every 5 seconds that it has not received any camera or IMU messages.
[screenshot]

I checked to make sure that the raw images were being published to image_raw by playing the rosbag and running rostopic echo image_raw.
I also checked to make sure that all filepaths are found. If I Ctrl+C the ROVIOLI process, it informs me if it did not find a specific filepath, but I have fixed all filepath errors.

I was wondering what else I can debug to ensure that ROVIOLI can receive the imu data and image_raw data from playing the rosbag with the roslaunch file and live script running.

Also @mfehr I will most definitely look into adding a third option. Thanks for all the info on how I can go about doing that!


mfehr commented on May 30, 2024
  • Could you please post your current calibration files somewhere, so we can have a look? The ncamera and both imu files.
  • Add some output here to log which topics ROVIOLI is subscribing to. This actually would be a good idea to have anyways, so feel free to open a PR for it.
  • You can use rqt_graph to check the publisher/subscriber connections between the nodes.
  • There is a non-zero chance that ROVIOLI is discarding your messages for some reason (e.g. faulty timestamps, low IMU frequency, etc.). By adding log messages here and here at the beginning of the function bodies, you will be able to see whether any ROS messages make it into ROVIOLI at all.
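On the last point, a quick self-check is possible before instrumenting ROVIOLI: extract the message timestamps from your bag and verify that they are strictly increasing and arrive at a plausible rate. A minimal illustrative sketch (not maplab code; the 100 Hz threshold is an assumption based on the rate comment earlier in the thread):

```python
# Illustrative sketch: flag the two timestamp problems mentioned above,
# non-increasing stamps and a low message rate.
def check_timestamps(stamps, min_rate_hz=100.0):
    """Return a list of human-readable issues found in `stamps` (seconds)."""
    issues = []
    for prev, cur in zip(stamps, stamps[1:]):
        if cur <= prev:
            issues.append(f"non-increasing stamp: {prev} -> {cur}")
    if len(stamps) > 1 and stamps[-1] > stamps[0]:
        rate = (len(stamps) - 1) / (stamps[-1] - stamps[0])
        if rate < min_rate_hz:
            issues.append(f"rate {rate:.1f} Hz below {min_rate_hz} Hz")
    return issues

# A clean 20 Hz IMU stream: ordered stamps, but the low-rate warning fires.
print(check_timestamps([i * 0.05 for i in range(5)]))
```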


kashmoneygt commented on May 30, 2024

Hello,

I am having trouble visualizing the map in maplab_console once I have created the map with ROVIOLI.
Side note: thank you for all the tips and help you gave me in creating the map. I'm very much a beginner, but your advice has helped me out very much.

I created the map with the following:
Terminal1: running the roslaunch file that subscribes to compressed img topic and broadcasts raw img topic
$ roslaunch ~/uncompress_img.launch

File contents of uncompress_img.launch:
[screenshot]

Terminal2: running the rovioli live script
$ rosrun rovioli topic_to_map ~/bags/maps/

contents of topic_to_map:
[screenshot]

Terminal3: playing the rosbags
$ rosbag play -s 120 *.bag

Notes:
total of 103 rosbags
each bag contains 30 to 43 seconds of recording
the rosbags are continuous recordings of an unmanned vessel moving autonomously around a lake, staying 10 meters from the shoreline
skipping the first 120 s of the bags because they consist of getting the vessel ready

rosbag info .bag output:
[screenshot]

ROVIOLI is successfully subscribing to the rosbag's broadcasted messages:
[screenshot]

Creation of the map in ROVIOLI looks like the following:
[screenshot]


Once I try to visualize the map via maplab_console and rviz, it does not display properly.
I am assuming this is because of my lack of understanding of rviz, so I consulted wikis and forums on rviz help but was not able to get anywhere.

Terminal1: after maplab_console successfully starts
$ load --map_folder ~/bags/maps/_02
$ visualize

Terminal2: running rviz
rosrun rviz rviz

I followed the steps from here: https://github.com/ethz-asl/maplab/wiki/Console-map-management
"On the left side, click the Add button and select the data you want to visualize. In the tab By topic, select Marker from /vi_map_edges/viwls and PointCloud2 from vi_map_landmarks, should give you a view similar to this:"

However, my view looks like the following:
Closeup:
[screenshot]

Little bit zoomed out:
[screenshot]

Zoomed out very far:
[screenshot]

I read up on fixed frames, and I know that I should select 'map' from the two options I'm given: 'map' and 'camera_fe01'.

My question is: what can I do so that the map I created can possibly resemble the lake that the unmanned vehicle went around?


mfehr commented on May 30, 2024

Hi,

So first of all, the problem you have is certainly unrelated to visualizing the map: your rovio(li) screenshot makes it evident that the estimation itself is not working. No estimation, no meaningful map to visualize. To debug this you need to focus on getting rovio working. That means looking at the ROVIO window and at the tf and stamped transforms in RVIZ while ROVIOLI is running, and increasing the verbosity level to get more rovio output, as @schneith suggested. Since image and imu are now connected to rovio, the problem lies elsewhere. In our posts above we already listed the most likely problems:

  • Calibration
    • The camera calibration yaml needs to be adapted to your camera, preferably by properly calibrating it with kalibr, or alternatively by manually changing the values to whatever the dataset provides (website, CameraInfo topic, ...).
    • This includes the camera extrinsics, i.e. the transformation between IMU and camera. If this is not constant because of your pan-tilt camera, you will need to significantly modify ROVIOLI to adapt or estimate it on the fly; this is not supported!
    • The IMU sigmas need to match the model of your IMU; you can't use the ones from euroc.
  • The imu frequency is very low for this dataset, as @ZacharyTaylor pointed out; we are not sure ROVIO works at this frequency.

Please also post your current calibration files (camera, imu maplab, imu rovio) as suggested above.


kashmoneygt commented on May 30, 2024

Thank you for the quick response.
Sorry that I did not post the calibration files when previously asked.
The transformation between the camera and the imu is constant. The imu sensor is situated 10 cm below and 5 cm to the left of the camera, but I am not sure how to translate these measurements into the camera calibration file.

camera:
[screenshot]
imu maplab:
[screenshot]
imu rovio:
[screenshot]

I originally had a problem running kalibr a few weeks back, but I will try it once again.

The problem:
kpatel2@gtlpc101:~/Downloads/kalibr-cde$ ./cde-exec kalibr_calibrate_imu_camera --bag ~/bags/2015-07-23-09-44-08_50.bag --cam ~/camera_calib/camchain.yaml --imu ~/camera_calib/imu.yaml --target ~/camera_calib/checkerboard.yaml
importing libraries
Initializing IMUs:
Update rate: 20.0
Accelerometer:
Noise density: 0.00186
Noise density (discrete): 0.0083181728763
Random walk: 0.000433
Gyroscope:
Noise density: 0.000187
Noise density (discrete): 0.000836289423585
Random walk: 2.66e-05
Initializing imu rosbag dataset reader:
Dataset: /home/GTL/kpatel2/bags/2015-07-23-09-44-08_50.bag
Topic: /imu/data
Number of messages: 878
Reading IMU data (/imu/data)
Read 878 imu readings over 44.3 seconds
Initializing calibration target:
Type: checkerboard
Rows
Count: 7
Distance: 0.06 [m]
Cols
Count: 6
Distance: 0.06 [m]
Initializing camera chain:
Camera chain - cam0:
Camera model: pinhole
Focal length: [461.629, 460.152]
Principal point: [362.68, 246.049]
Distortion model: radtan
Distortion coefficients: [-0.27695497, 0.06712482, 0.00087538, 0.00011556]
baseline: no data available
Initializing camera rosbag dataset reader:
Dataset: /home/GTL/kpatel2/bags/2015-07-23-09-44-08_50.bag
Topic: /axis/camera_info
Number of images: 444
Extracting calibration target corners
Traceback (most recent call last):
File "/var/kalibr-build/devel/bin/kalibr_calibrate_imu_camera", line 5, in <module>
exec(fh.read())
File "<string>", line 206, in <module>
File "<string>", line 149, in main
File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 391, in __init__
showOneStep=parsed.extractionstepping) )
File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 59, in __init__
self.targetObservations = kc.extractCornersFromDataset(self.dataset, self.detector, multithreading=multithreading)
File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 73, in extractCornersFromDataset
raise RuntimeError("Exception during multithreaded extraction: {0}".format(e))
RuntimeError: Exception during multithreaded extraction: '_sensor_msgs__CameraInfo' object has no attribute 'encoding'
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
ImportError: No module named apport.fileutils

Original exception was:
Traceback (most recent call last):
File "/var/kalibr-build/devel/bin/kalibr_calibrate_imu_camera", line 5, in <module>
exec(fh.read())
File "<string>", line 206, in <module>
File "<string>", line 149, in main
File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 391, in __init__
showOneStep=parsed.extractionstepping) )
File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 59, in __init__
self.targetObservations = kc.extractCornersFromDataset(self.dataset, self.detector, multithreading=multithreading)
File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 73, in extractCornersFromDataset
raise RuntimeError("Exception during multithreaded extraction: {0}".format(e))
RuntimeError: Exception during multithreaded extraction: '_sensor_msgs__CameraInfo' object has no attribute 'encoding'

Thanks again.


mfehr commented on May 30, 2024

So based on your calibration files, you adapted only the camera intrinsics so far:

  • The camera-to-imu transformation is the T_B_C matrix in the camera yaml (it transforms points in camera coordinates to Base/IMU coordinates).
  • The IMU sigmas need to be adapted to whatever IMU the dataset is using; you should find the values in the manufacturer's specs. For the rovio values you can use the squared maplab values to get started.
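For what it's worth, the numbers in the kalibr log posted above are consistent with the "discrete" noise density being the continuous sigma scaled by the square root of the update rate, and the squared-values suggestion for rovio can be sketched the same way. The values below are taken from that log; treat this as an illustration, not a substitute for the manufacturer's specs:

```python
import math

# Values from the kalibr log earlier in the thread (20 Hz IMU).
rate_hz = 20.0
acc_noise_density = 0.00186    # continuous-time accelerometer sigma
gyro_noise_density = 0.000187  # continuous-time gyroscope sigma

# kalibr's "Noise density (discrete)" is the continuous value * sqrt(rate).
acc_discrete = acc_noise_density * math.sqrt(rate_hz)
gyro_discrete = gyro_noise_density * math.sqrt(rate_hz)
print(acc_discrete)   # ≈ 0.0083181728763, matching the log
print(gyro_discrete)  # ≈ 0.000836289423585, matching the log

# Suggested starting point for the rovio sigma file: squared maplab values.
acc_rovio = acc_noise_density ** 2
gyro_rovio = gyro_noise_density ** 2
```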

To get this IMU-to-camera transformation you will need a proper calibration dataset of your sensor, which I assume you have, since you tried using kalibr. So I assume the public dataset you're using does not already provide this transformation? Have you checked the tf tree for an IMU frame?
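To make the T_B_C convention concrete: it is a 4x4 homogeneous transform taking points from the camera frame to the Base/IMU frame, so its last column is the camera origin expressed in the IMU frame. The sketch below uses an identity rotation and made-up offsets purely for illustration; the real rotation and translation must come from an actual calibration (e.g. kalibr):

```python
# Illustrative only: apply a 4x4 T_B_C (camera -> Base/IMU) to a
# homogeneous point. Identity rotation and hypothetical offsets; real
# values must come from calibration.
def apply_transform(T, p_cam):
    """Multiply a 4x4 matrix with a homogeneous point [x, y, z, 1]."""
    return [sum(T[i][j] * p_cam[j] for j in range(4)) for i in range(4)]

T_B_C = [
    [1.0, 0.0, 0.0, 0.05],   # hypothetical x offset [m]
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, -0.10],  # hypothetical z offset [m]
    [0.0, 0.0, 0.0, 1.0],
]

# The camera origin in the IMU frame is just the translation column:
print(apply_transform(T_B_C, [0.0, 0.0, 0.0, 1.0]))  # [0.05, 0.0, -0.1, 1.0]
```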

The error you got at least claims that your CameraInfo message is missing the encoding field. I'm not sure that's the real reason, but you should check.


SYSUHPK commented on May 30, 2024

@kashmoneygt
Have you solved your problem? I also encountered a similar problem to yours. I ran the following command:

kalibr_calibrate_cameras --bag /home/nvidia/kalibr_workspace/output.bag --topics /android/image_raw/compressed --models pinhole-equi --target /home/nvidia/kalibr_workspace/ap.yaml

But it reported an error:

importing libraries
Initializing cam0:
Camera model: pinhole-equi
Dataset: /home/nvidia/kalibr_workspace/output.bag
Topic: /android/image_raw/compressed
Number of images: 877
Extracting calibration target corners
Traceback (most recent call last):
File "/home/nvidia/kalibr_workspace/devel/bin/kalibr_calibrate_cameras", line 15, in <module>
exec(compile(fh.read(), python_script, 'exec'), context)
File "/home/nvidia/kalibr_workspace/src/Kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras", line 447, in <module>
main()
File "/home/nvidia/kalibr_workspace/src/Kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras", line 178, in main
noTransformation=True)
File "/home/nvidia/kalibr_workspace/src/Kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 73, in extractCornersFromDataset
raise RuntimeError("Exception during multithreaded extraction: {0}".format(e))
RuntimeError: Exception during multithreaded extraction: '_sensor_msgs__CompressedImage' object has no attribute 'encoding'
What should I do?

