This repo contains a simple ROS node and corresponding launch file that produce stereo pointclouds and transform them into the frame of an SL (structured light) sensor.
The launch file crops the input images by a factor of four (to help with processing speed), feeds the cropped images into the standard ROS node `stereo_image_proc` to produce stereo pointclouds, and also runs the `pointcloud_fusion` node, which publishes the static extrinsic tf transform between the SL sensor frame and the stereo frame and transforms the stereo pointcloud into the SL sensor's frame.
Seikowave extrinsics are located at `cfg/extrinsics/stereo_Seikowave_extrinsics.yaml`.
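The extrinsics file stores the stereo-to-SL calibration loaded at launch. As a rough illustration only, such a YAML might look like the sketch below; the key names and values here are assumptions, and the actual schema is whatever `pointcloud_fusion` expects:

```yaml
# Hypothetical layout -- check the real file for the expected keys.
rotation:      # 3x3 row-major rotation, stereo frame -> SL frame
  [1.0, 0.0, 0.0,
   0.0, 1.0, 0.0,
   0.0, 0.0, 1.0]
translation: [0.1, 0.0, -0.05]   # metres
```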
- `camera/stereo_image_proc`
  Camera stereo processing
- `pointcloud_fusion`
  Publishes extrinsic /tf information and transforms the stereo pointcloud into the SL frame
- `camera/left/crop_decimate`
  Left image crop
- `camera/right/crop_decimate`
  Right image crop
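The node graph above could be wired up in a launch file roughly like the sketch below. This is a hedged illustration, not the repo's actual launch file: the namespaces, nodelet wiring, remappings, and the `pointcloud_fusion` package/executable names are assumptions.

```xml
<launch>
  <group ns="camera">
    <!-- Decimate the left image by 4 in each axis to speed up stereo processing
         (right camera node is analogous) -->
    <node pkg="nodelet" type="nodelet" name="left_crop_decimate"
          args="standalone image_proc/crop_decimate">
      <param name="decimation_x" value="4"/>
      <param name="decimation_y" value="4"/>
      <remap from="camera/image_raw" to="left/image_raw"/>
      <remap from="camera/camera_info" to="left/camera_info"/>
    </node>

    <!-- Standard ROS stereo pipeline: rectification, disparity, pointcloud -->
    <node pkg="stereo_image_proc" type="stereo_image_proc" name="stereo_image_proc"/>
  </group>

  <!-- Publishes the static SL/stereo extrinsic tf and republishes
       the stereo cloud in the SL sensor frame -->
  <node pkg="pointcloud_fusion" type="pointcloud_fusion" name="pointcloud_fusion">
    <rosparam command="load"
              file="$(find pointcloud_fusion)/cfg/extrinsics/stereo_Seikowave_extrinsics.yaml"/>
  </node>
</launch>
```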
The launch file has been configured to work with Point Grey cameras and a Seikowave SL sensor. As currently configured, it has the following input/output structure:
- `/camera/left/image_raw` (sensor_msgs/Image)
  Left input image
- `/camera/right/image_raw` (sensor_msgs/Image)
  Right input image
- `/camera/left/camera_info` (sensor_msgs/CameraInfo)
  Left camera info
- `/camera/right/camera_info` (sensor_msgs/CameraInfo)
  Right camera info
- `/seikowave_node/cloud` (sensor_msgs/PointCloud2)
  SL pointcloud input

- `/camera/points2` (sensor_msgs/PointCloud2)
  Stereo camera pointcloud output at frame rate
- `/fusion/stereo/cloud` (sensor_msgs/PointCloud2)
  Stereo camera pointcloud output, synced with the SL sensor at the lowest framerate
- `/fusion/SL/cloud` (sensor_msgs/PointCloud2)
  SL sensor pointcloud output, synced with the stereo pointcloud at the lowest framerate
- `/tf`
  Publishes the transform between the stereo pointcloud frame (default: `head_stereo/left_optical_frame`) and the Seikowave pointcloud frame (default: `seikowave/optical_frame`)
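Conceptually, transforming the stereo cloud into the SL frame means applying the static extrinsic rotation and translation to every point. A minimal pure-Python sketch of that per-point math is below; the rotation and translation values are made-up placeholders, not the real calibration from the extrinsics file:

```python
import math

# Hypothetical extrinsics: rotate 90 degrees about Z, then translate.
# Real values come from cfg/extrinsics/stereo_Seikowave_extrinsics.yaml.
theta = math.pi / 2
R = [
    [math.cos(theta), -math.sin(theta), 0.0],
    [math.sin(theta),  math.cos(theta), 0.0],
    [0.0,              0.0,             1.0],
]
t = [0.1, 0.0, -0.05]  # metres, stereo frame -> SL frame (assumed)

def transform_point(p):
    """Map one point from the stereo optical frame into the SL sensor frame."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Apply the transform to a toy two-point "cloud".
stereo_cloud = [[1.0, 0.0, 2.0], [0.0, 1.0, 2.0]]
sl_cloud = [transform_point(p) for p in stereo_cloud]
```

In the actual node this is done by tf with the published static transform rather than by hand, but the arithmetic per point is the same.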
To install, clone into a catkin workspace and compile using either `catkin build` or `catkin_make`.